00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2234 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3497 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.151 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.152 The recommended git tool is: git 00:00:00.152 using credential 00000000-0000-0000-0000-000000000002 00:00:00.156 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.224 Fetching changes from the remote Git repository 00:00:00.225 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.291 Using shallow fetch with depth 1 00:00:00.291 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.291 > git --version # timeout=10 00:00:00.340 > git --version # 'git version 2.39.2' 00:00:00.340 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.366 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.366 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.909 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.923 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.935 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD) 00:00:06.935 > git config core.sparsecheckout # timeout=10 00:00:06.948 > git read-tree -mu HEAD # timeout=10 00:00:06.965 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5 00:00:06.984 Commit message: "packer: Merge irdmafedora into main fedora image" 00:00:06.984 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10 00:00:07.111 [Pipeline] Start of Pipeline 00:00:07.122 [Pipeline] library 00:00:07.123 Loading library shm_lib@master 00:00:07.123 Library shm_lib@master is cached. Copying from home. 00:00:07.139 [Pipeline] node 00:00:07.151 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.152 [Pipeline] { 00:00:07.161 [Pipeline] catchError 00:00:07.162 [Pipeline] { 00:00:07.170 [Pipeline] wrap 00:00:07.177 [Pipeline] { 00:00:07.185 [Pipeline] stage 00:00:07.186 [Pipeline] { (Prologue) 00:00:07.198 [Pipeline] echo 00:00:07.199 Node: VM-host-SM9 00:00:07.203 [Pipeline] cleanWs 00:00:07.210 [WS-CLEANUP] Deleting project workspace... 00:00:07.210 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.216 [WS-CLEANUP] done 00:00:07.452 [Pipeline] setCustomBuildProperty 00:00:07.553 [Pipeline] httpRequest 00:00:08.132 [Pipeline] echo 00:00:08.134 Sorcerer 10.211.164.101 is alive 00:00:08.143 [Pipeline] retry 00:00:08.145 [Pipeline] { 00:00:08.159 [Pipeline] httpRequest 00:00:08.163 HttpMethod: GET 00:00:08.164 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.164 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:08.183 Response Code: HTTP/1.1 200 OK 00:00:08.184 Success: Status code 200 is in the accepted range: 200,404 00:00:08.184 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:28.258 [Pipeline] } 00:00:28.276 [Pipeline] // retry 00:00:28.285 [Pipeline] sh 00:00:28.567 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz 00:00:28.584 [Pipeline] httpRequest 00:00:29.040 [Pipeline] echo 00:00:29.042 Sorcerer 10.211.164.101 is alive 00:00:29.054 [Pipeline] retry 00:00:29.057 [Pipeline] { 00:00:29.074 [Pipeline] httpRequest 00:00:29.079 HttpMethod: GET 00:00:29.080 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:29.081 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:00:29.098 Response Code: HTTP/1.1 200 OK 00:00:29.099 Success: Status code 200 is in the accepted range: 200,404 00:00:29.100 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:16.661 [Pipeline] } 00:01:16.679 [Pipeline] // retry 00:01:16.687 [Pipeline] sh 00:01:16.970 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz 00:01:19.519 [Pipeline] sh 00:01:19.803 + git -C spdk log --oneline -n5 00:01:19.803 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:01:19.803 a67b3561a dpdk: update submodule to include alarm_cancel fix 00:01:19.803 43f6d3385 nvmf: remove use of STAILQ for last_wqe events 00:01:19.803 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event() 00:01:19.803 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event() 00:01:19.823 [Pipeline] withCredentials 00:01:19.833 > git --version # timeout=10 00:01:19.846 > git --version # 'git version 2.39.2' 00:01:19.862 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:19.864 [Pipeline] { 00:01:19.873 [Pipeline] retry 00:01:19.874 [Pipeline] { 00:01:19.890 [Pipeline] sh 00:01:20.171 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:20.182 [Pipeline] } 00:01:20.200 [Pipeline] // retry 00:01:20.206 [Pipeline] } 00:01:20.223 [Pipeline] // withCredentials 00:01:20.233 [Pipeline] httpRequest 00:01:20.636 [Pipeline] echo 00:01:20.638 Sorcerer 10.211.164.101 is alive 00:01:20.648 [Pipeline] retry 00:01:20.650 [Pipeline] { 00:01:20.665 [Pipeline] httpRequest 00:01:20.669 HttpMethod: GET 00:01:20.670 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.670 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:20.674 Response Code: HTTP/1.1 200 OK 00:01:20.675 Success: Status code 200 is in the accepted range: 200,404 00:01:20.676 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:58.401 
[Pipeline] } 00:01:58.418 [Pipeline] // retry 00:01:58.426 [Pipeline] sh 00:01:58.724 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:02:00.108 [Pipeline] sh 00:02:00.389 + git -C dpdk log --oneline -n5 00:02:00.389 caf0f5d395 version: 22.11.4 00:02:00.389 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:00.389 dc9c799c7d vhost: fix missing spinlock unlock 00:02:00.389 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:00.389 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:00.405 [Pipeline] writeFile 00:02:00.421 [Pipeline] sh 00:02:00.698 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:00.709 [Pipeline] sh 00:02:00.987 + cat autorun-spdk.conf 00:02:00.987 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.987 SPDK_TEST_NVMF=1 00:02:00.987 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.987 SPDK_TEST_URING=1 00:02:00.987 SPDK_TEST_USDT=1 00:02:00.987 SPDK_RUN_UBSAN=1 00:02:00.987 NET_TYPE=virt 00:02:00.987 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:00.987 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:00.987 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:00.994 RUN_NIGHTLY=1 00:02:00.996 [Pipeline] } 00:02:01.009 [Pipeline] // stage 00:02:01.022 [Pipeline] stage 00:02:01.024 [Pipeline] { (Run VM) 00:02:01.034 [Pipeline] sh 00:02:01.311 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:01.311 + echo 'Start stage prepare_nvme.sh' 00:02:01.311 Start stage prepare_nvme.sh 00:02:01.311 + [[ -n 3 ]] 00:02:01.311 + disk_prefix=ex3 00:02:01.311 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:02:01.311 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:02:01.311 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:02:01.311 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.311 ++ SPDK_TEST_NVMF=1 00:02:01.311 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:01.311 ++ SPDK_TEST_URING=1 00:02:01.311 ++ SPDK_TEST_USDT=1 00:02:01.311 ++ SPDK_RUN_UBSAN=1 00:02:01.311 ++ NET_TYPE=virt 00:02:01.311 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:01.311 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:01.311 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.311 ++ RUN_NIGHTLY=1 00:02:01.311 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:01.311 + nvme_files=() 00:02:01.311 + declare -A nvme_files 00:02:01.311 + backend_dir=/var/lib/libvirt/images/backends 00:02:01.311 + nvme_files['nvme.img']=5G 00:02:01.311 + nvme_files['nvme-cmb.img']=5G 00:02:01.311 + nvme_files['nvme-multi0.img']=4G 00:02:01.311 + nvme_files['nvme-multi1.img']=4G 00:02:01.311 + nvme_files['nvme-multi2.img']=4G 00:02:01.311 + nvme_files['nvme-openstack.img']=8G 00:02:01.311 + nvme_files['nvme-zns.img']=5G 00:02:01.311 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:01.311 + (( SPDK_TEST_FTL == 1 )) 00:02:01.311 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:01.311 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:01.311 + for nvme in "${!nvme_files[@]}" 00:02:01.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:02:01.311 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.311 + for nvme in "${!nvme_files[@]}" 00:02:01.311 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:02:01.312 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.312 + for nvme in "${!nvme_files[@]}" 00:02:01.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:02:01.312 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:01.312 + for nvme in "${!nvme_files[@]}" 00:02:01.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:02:01.312 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.312 + for nvme in "${!nvme_files[@]}" 00:02:01.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:02:01.312 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.312 + for nvme in "${!nvme_files[@]}" 00:02:01.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:02:01.312 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:01.312 + for nvme in "${!nvme_files[@]}" 00:02:01.312 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:02:01.570 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:01.570 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:02:01.570 + echo 'End stage prepare_nvme.sh' 00:02:01.570 End stage prepare_nvme.sh 00:02:01.581 [Pipeline] sh 00:02:01.858 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:01.858 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora39 00:02:01.858 00:02:01.858 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:02:01.858 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:02:01.858 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:01.858 HELP=0 00:02:01.858 DRY_RUN=0 00:02:01.858 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:02:01.858 NVME_DISKS_TYPE=nvme,nvme, 00:02:01.858 NVME_AUTO_CREATE=0 00:02:01.858 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:02:01.858 NVME_CMB=,, 00:02:01.858 NVME_PMR=,, 00:02:01.858 NVME_ZNS=,, 00:02:01.858 NVME_MS=,, 00:02:01.858 NVME_FDP=,, 
00:02:01.858 SPDK_VAGRANT_DISTRO=fedora39 00:02:01.858 SPDK_VAGRANT_VMCPU=10 00:02:01.858 SPDK_VAGRANT_VMRAM=12288 00:02:01.858 SPDK_VAGRANT_PROVIDER=libvirt 00:02:01.858 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:01.858 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:01.858 SPDK_OPENSTACK_NETWORK=0 00:02:01.858 VAGRANT_PACKAGE_BOX=0 00:02:01.858 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:01.858 FORCE_DISTRO=true 00:02:01.858 VAGRANT_BOX_VERSION= 00:02:01.858 EXTRA_VAGRANTFILES= 00:02:01.858 NIC_MODEL=e1000 00:02:01.858 00:02:01.858 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:02:01.858 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:02:04.389 Bringing machine 'default' up with 'libvirt' provider... 00:02:05.327 ==> default: Creating image (snapshot of base box volume). 00:02:05.327 ==> default: Creating domain with the following settings... 00:02:05.327 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727762070_9796f4d98cb4ff191efb 00:02:05.327 ==> default: -- Domain type: kvm 00:02:05.327 ==> default: -- Cpus: 10 00:02:05.327 ==> default: -- Feature: acpi 00:02:05.327 ==> default: -- Feature: apic 00:02:05.327 ==> default: -- Feature: pae 00:02:05.327 ==> default: -- Memory: 12288M 00:02:05.327 ==> default: -- Memory Backing: hugepages: 00:02:05.327 ==> default: -- Management MAC: 00:02:05.327 ==> default: -- Loader: 00:02:05.327 ==> default: -- Nvram: 00:02:05.327 ==> default: -- Base box: spdk/fedora39 00:02:05.327 ==> default: -- Storage pool: default 00:02:05.327 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727762070_9796f4d98cb4ff191efb.img (20G) 00:02:05.327 ==> default: -- Volume Cache: default 00:02:05.327 ==> default: -- Kernel: 00:02:05.327 ==> default: -- Initrd: 00:02:05.327 ==> default: -- Graphics Type: vnc 00:02:05.327 ==> default: -- Graphics Port: -1 00:02:05.327 ==> default: -- Graphics IP: 127.0.0.1 00:02:05.327 ==> default: -- Graphics Password: Not defined 00:02:05.327 ==> default: -- Video Type: cirrus 00:02:05.327 ==> default: -- Video VRAM: 9216 00:02:05.327 ==> default: -- Sound Type: 00:02:05.327 ==> default: -- Keymap: en-us 00:02:05.327 ==> default: -- TPM Path: 00:02:05.327 ==> default: -- INPUT: type=mouse, bus=ps2 00:02:05.327 ==> default: -- Command line args: 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:05.327 ==> default: -> value=-drive, 00:02:05.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:05.327 ==> default: -> value=-drive, 00:02:05.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:05.327 ==> default: -> value=-drive, 00:02:05.327 ==> default: -> 
value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:05.327 ==> default: -> value=-drive, 00:02:05.327 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:05.327 ==> default: -> value=-device, 00:02:05.327 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:05.327 ==> default: Creating shared folders metadata... 00:02:05.327 ==> default: Starting domain. 00:02:06.708 ==> default: Waiting for domain to get an IP address... 00:02:24.793 ==> default: Waiting for SSH to become available... 00:02:24.793 ==> default: Configuring and enabling network interfaces... 00:02:27.326 default: SSH address: 192.168.121.222:22 00:02:27.326 default: SSH username: vagrant 00:02:27.326 default: SSH auth method: private key 00:02:29.229 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:35.792 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:42.368 ==> default: Mounting SSHFS shared folder... 00:02:42.934 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:42.934 ==> default: Checking Mount.. 00:02:44.310 ==> default: Folder Successfully Mounted! 00:02:44.310 ==> default: Running provisioner: file... 00:02:44.878 default: ~/.gitconfig => .gitconfig 00:02:45.447 00:02:45.447 SUCCESS! 00:02:45.447 00:02:45.447 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:45.447 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:45.447 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:45.447 00:02:45.457 [Pipeline] } 00:02:45.473 [Pipeline] // stage 00:02:45.483 [Pipeline] dir 00:02:45.483 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:45.485 [Pipeline] { 00:02:45.501 [Pipeline] catchError 00:02:45.503 [Pipeline] { 00:02:45.517 [Pipeline] sh 00:02:45.796 + vagrant ssh-config --host vagrant 00:02:45.796 + sed -ne /^Host/,$p 00:02:45.796 + tee ssh_conf 00:02:49.986 Host vagrant 00:02:49.986 HostName 192.168.121.222 00:02:49.986 User vagrant 00:02:49.986 Port 22 00:02:49.986 UserKnownHostsFile /dev/null 00:02:49.986 StrictHostKeyChecking no 00:02:49.986 PasswordAuthentication no 00:02:49.986 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:49.986 IdentitiesOnly yes 00:02:49.986 LogLevel FATAL 00:02:49.986 ForwardAgent yes 00:02:49.986 ForwardX11 yes 00:02:49.986 00:02:49.998 [Pipeline] withEnv 00:02:50.000 [Pipeline] { 00:02:50.012 [Pipeline] sh 00:02:50.289 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:50.290 source /etc/os-release 00:02:50.290 [[ -e /image.version ]] && img=$(< /image.version) 00:02:50.290 # Minimal, systemd-like check. 
00:02:50.290 if [[ -e /.dockerenv ]]; then 00:02:50.290 # Clear garbage from the node's name: 00:02:50.290 # agt-er_autotest_547-896 -> autotest_547-896 00:02:50.290 # $HOSTNAME is the actual container id 00:02:50.290 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:50.290 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:50.290 # We can assume this is a mount from a host where container is running, 00:02:50.290 # so fetch its hostname to easily identify the target swarm worker. 00:02:50.290 container="$(< /etc/hostname) ($agent)" 00:02:50.290 else 00:02:50.290 # Fallback 00:02:50.290 container=$agent 00:02:50.290 fi 00:02:50.290 fi 00:02:50.290 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:50.290 00:02:50.301 [Pipeline] } 00:02:50.316 [Pipeline] // withEnv 00:02:50.323 [Pipeline] setCustomBuildProperty 00:02:50.336 [Pipeline] stage 00:02:50.338 [Pipeline] { (Tests) 00:02:50.355 [Pipeline] sh 00:02:50.634 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:50.905 [Pipeline] sh 00:02:51.183 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:51.456 [Pipeline] timeout 00:02:51.456 Timeout set to expire in 1 hr 0 min 00:02:51.458 [Pipeline] { 00:02:51.472 [Pipeline] sh 00:02:51.749 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:52.342 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut 00:02:52.352 [Pipeline] sh 00:02:52.629 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:52.946 [Pipeline] sh 00:02:53.229 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:53.503 [Pipeline] sh 00:02:53.780 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:54.039 ++ readlink -f spdk_repo 00:02:54.039 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:54.039 + [[ -n /home/vagrant/spdk_repo ]] 00:02:54.039 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:54.039 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:54.039 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:54.039 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:54.039 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:54.039 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:54.039 + cd /home/vagrant/spdk_repo 00:02:54.039 + source /etc/os-release 00:02:54.039 ++ NAME='Fedora Linux' 00:02:54.039 ++ VERSION='39 (Cloud Edition)' 00:02:54.039 ++ ID=fedora 00:02:54.039 ++ VERSION_ID=39 00:02:54.039 ++ VERSION_CODENAME= 00:02:54.039 ++ PLATFORM_ID=platform:f39 00:02:54.039 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:54.039 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:54.039 ++ LOGO=fedora-logo-icon 00:02:54.039 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:54.039 ++ HOME_URL=https://fedoraproject.org/ 00:02:54.039 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:54.039 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:54.039 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:54.039 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:54.039 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:54.039 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:54.039 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:54.039 ++ SUPPORT_END=2024-11-12 00:02:54.039 ++ VARIANT='Cloud Edition' 00:02:54.039 ++ VARIANT_ID=cloud 00:02:54.039 + uname -a 00:02:54.039 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:54.039 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:54.298 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:54.298 Hugepages 00:02:54.298 node hugesize free / total 00:02:54.298 node0 1048576kB 0 / 0 00:02:54.298 node0 2048kB 0 / 0 00:02:54.298 00:02:54.298 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:54.557 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:54.558 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:54.558 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:54.558 + rm -f /tmp/spdk-ld-path 00:02:54.558 + source autorun-spdk.conf 00:02:54.558 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:54.558 ++ SPDK_TEST_NVMF=1 00:02:54.558 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:54.558 ++ SPDK_TEST_URING=1 00:02:54.558 ++ SPDK_TEST_USDT=1 00:02:54.558 ++ SPDK_RUN_UBSAN=1 00:02:54.558 ++ NET_TYPE=virt 00:02:54.558 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:54.558 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:54.558 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:54.558 ++ RUN_NIGHTLY=1 00:02:54.558 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:54.558 + [[ -n '' ]] 00:02:54.558 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:54.558 + for M in /var/spdk/build-*-manifest.txt 00:02:54.558 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:54.558 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.558 + for M in /var/spdk/build-*-manifest.txt 00:02:54.558 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:54.558 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.558 + for M in /var/spdk/build-*-manifest.txt 00:02:54.558 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:54.558 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:54.558 ++ uname 00:02:54.558 + [[ Linux == \L\i\n\u\x ]] 00:02:54.558 + sudo dmesg -T 00:02:54.558 + sudo dmesg --clear 00:02:54.558 + dmesg_pid=5997 00:02:54.558 + [[ Fedora Linux == FreeBSD ]] 
00:02:54.558 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:54.558 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:54.558 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:54.558 + sudo dmesg -Tw 00:02:54.558 + [[ -x /usr/src/fio-static/fio ]] 00:02:54.558 + export FIO_BIN=/usr/src/fio-static/fio 00:02:54.558 + FIO_BIN=/usr/src/fio-static/fio 00:02:54.558 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:54.558 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:54.558 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:54.558 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:54.558 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:54.558 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:54.558 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:54.558 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:54.558 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:54.558 Test configuration: 00:02:54.558 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:54.558 SPDK_TEST_NVMF=1 00:02:54.558 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:54.558 SPDK_TEST_URING=1 00:02:54.558 SPDK_TEST_USDT=1 00:02:54.558 SPDK_RUN_UBSAN=1 00:02:54.558 NET_TYPE=virt 00:02:54.558 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:54.558 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:54.558 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:54.817 RUN_NIGHTLY=1 05:55:20 -- common/autotest_common.sh@1680 -- $ [[ n == y ]] 00:02:54.817 05:55:20 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:54.817 05:55:20 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:54.817 05:55:20 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:54.817 05:55:20 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:54.817 05:55:20 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:54.817 05:55:20 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.817 05:55:20 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.817 05:55:20 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.817 05:55:20 -- paths/export.sh@5 -- $ export PATH 00:02:54.817 05:55:20 -- paths/export.sh@6 -- $ echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:54.817 05:55:20 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:54.817 05:55:20 -- common/autobuild_common.sh@479 -- $ date +%s 00:02:54.817 05:55:20 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727762120.XXXXXX 00:02:54.817 05:55:20 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727762120.4cR7Kt 00:02:54.817 05:55:20 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:02:54.817 05:55:20 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:02:54.817 05:55:20 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:54.817 05:55:20 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:54.817 05:55:20 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:54.817 05:55:20 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:54.817 05:55:20 -- common/autobuild_common.sh@495 -- $ get_config_params 00:02:54.817 05:55:20 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:54.817 05:55:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.817 05:55:20 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:54.817 05:55:20 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:02:54.817 05:55:20 -- pm/common@17 -- $ local monitor 00:02:54.817 05:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.817 05:55:20 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:54.817 05:55:20 -- pm/common@25 -- $ sleep 1 00:02:54.817 05:55:20 -- pm/common@21 -- $ date +%s 00:02:54.817 05:55:20 -- pm/common@21 -- $ date +%s 00:02:54.817 05:55:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762120 00:02:54.817 05:55:20 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727762120 00:02:54.817 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762120_collect-vmstat.pm.log 00:02:54.817 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727762120_collect-cpu-load.pm.log 00:02:55.757 05:55:21 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:02:55.757 05:55:21 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:55.757 05:55:21 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:55.757 05:55:21 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:55.757 05:55:21 -- spdk/autobuild.sh@16 -- $ date -u 00:02:55.757 Tue 
Oct 1 05:55:21 AM UTC 2024 00:02:55.757 05:55:21 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:55.757 v25.01-pre-17-g09cc66129 00:02:55.757 05:55:21 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:55.757 05:55:21 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:55.757 05:55:21 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:55.757 05:55:21 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:55.757 05:55:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:55.757 05:55:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.757 ************************************ 00:02:55.757 START TEST ubsan 00:02:55.757 ************************************ 00:02:55.757 using ubsan 00:02:55.757 05:55:21 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:02:55.757 00:02:55.757 real 0m0.000s 00:02:55.757 user 0m0.000s 00:02:55.757 sys 0m0.000s 00:02:55.757 05:55:21 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:55.757 ************************************ 00:02:55.757 05:55:21 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:55.757 END TEST ubsan 00:02:55.757 ************************************ 00:02:55.757 05:55:21 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:55.757 05:55:21 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:55.757 05:55:21 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:55.757 05:55:21 -- common/autotest_common.sh@1101 -- $ '[' 2 -le 1 ']' 00:02:55.757 05:55:21 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:55.757 05:55:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.757 ************************************ 00:02:55.757 START TEST build_native_dpdk 00:02:55.757 ************************************ 00:02:55.757 05:55:21 build_native_dpdk -- common/autotest_common.sh@1125 -- $ _build_native_dpdk 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:55.757 05:55:21 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:55.757 caf0f5d395 version: 22.11.4 00:02:55.757 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:55.757 dc9c799c7d vhost: fix missing spinlock unlock 00:02:55.757 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:55.757 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:55.757 05:55:21 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:55.758 
05:55:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:55.758 patching file config/rte_config.h 00:02:55.758 Hunk #1 succeeded at 60 (offset 1 line). 00:02:55.758 05:55:21 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:55.758 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:56.017 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:56.018 patching file lib/pcapng/rte_pcapng.c 00:02:56.018 Hunk #1 succeeded at 110 (offset -18 lines). 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:56.018 05:55:21 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:56.018 05:55:21 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:01.305 The Meson build system 00:03:01.305 Version: 1.5.0 00:03:01.305 Source dir: /home/vagrant/spdk_repo/dpdk 00:03:01.305 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:03:01.305 Build type: native build 00:03:01.305 Program cat found: YES (/usr/bin/cat) 00:03:01.305 Project name: DPDK 00:03:01.305 Project version: 22.11.4 00:03:01.305 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:01.305 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:01.305 Host machine cpu family: x86_64 00:03:01.305 Host machine cpu: x86_64 00:03:01.305 Message: ## Building in Developer Mode ## 00:03:01.305 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:01.305 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:03:01.305 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:03:01.305 Program objdump found: YES (/usr/bin/objdump) 00:03:01.305 Program python3 found: YES (/usr/bin/python3) 00:03:01.305 Program cat found: YES (/usr/bin/cat) 00:03:01.305 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:03:01.305 Checking for size of "void *" : 8 00:03:01.305 Checking for size of "void *" : 8 (cached) 00:03:01.305 Library m found: YES 00:03:01.305 Library numa found: YES 00:03:01.305 Has header "numaif.h" : YES 00:03:01.305 Library fdt found: NO 00:03:01.305 Library execinfo found: NO 00:03:01.305 Has header "execinfo.h" : YES 00:03:01.305 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:01.305 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:01.305 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:01.305 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:01.305 Run-time dependency openssl found: YES 3.1.1 00:03:01.305 Run-time dependency libpcap found: YES 1.10.4 00:03:01.305 Has header "pcap.h" with dependency libpcap: YES 00:03:01.305 Compiler for C supports arguments -Wcast-qual: YES 00:03:01.305 Compiler for C supports arguments -Wdeprecated: YES 00:03:01.305 Compiler for C supports arguments -Wformat: YES 00:03:01.305 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:01.305 Compiler for C supports arguments -Wformat-security: NO 00:03:01.305 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:01.305 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:01.306 Compiler for C supports arguments -Wnested-externs: YES 00:03:01.306 Compiler for C supports arguments -Wold-style-definition: YES 00:03:01.306 Compiler for C supports arguments -Wpointer-arith: YES 00:03:01.306 Compiler for C supports arguments -Wsign-compare: YES 00:03:01.306 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:01.306 Compiler for C supports arguments -Wundef: YES 00:03:01.306 Compiler for C supports arguments -Wwrite-strings: YES 00:03:01.306 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:01.306 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:01.306 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:01.306 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:01.306 Compiler for C supports arguments -mavx512f: YES 00:03:01.306 Checking if "AVX512 checking" compiles: YES 00:03:01.306 Fetching value of define "__SSE4_2__" : 1 00:03:01.306 Fetching value of define "__AES__" : 1 00:03:01.306 Fetching value of define "__AVX__" : 1 00:03:01.306 Fetching value of define "__AVX2__" : 1 00:03:01.306 Fetching value of define "__AVX512BW__" : (undefined) 00:03:01.306 Fetching value of define "__AVX512CD__" : (undefined) 00:03:01.306 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:01.306 Fetching value of define "__AVX512F__" : (undefined) 00:03:01.306 Fetching value of define "__AVX512VL__" : (undefined) 00:03:01.306 Fetching value of define "__PCLMUL__" : 1 00:03:01.306 Fetching value of define "__RDRND__" : 1 00:03:01.306 Fetching value of define "__RDSEED__" : 1 00:03:01.306 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:01.306 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:01.306 Message: lib/kvargs: Defining dependency "kvargs" 00:03:01.306 Message: lib/telemetry: Defining dependency "telemetry" 00:03:01.306 Checking for function "getentropy" : YES 00:03:01.306 Message: lib/eal: Defining dependency "eal" 00:03:01.306 Message: lib/ring: Defining dependency "ring" 00:03:01.306 Message: lib/rcu: Defining dependency "rcu" 00:03:01.306 Message: lib/mempool: Defining dependency "mempool" 00:03:01.306 Message: lib/mbuf: Defining dependency "mbuf" 00:03:01.306 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:03:01.306 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.306 Compiler for C supports arguments -mpclmul: YES 00:03:01.306 Compiler for C supports arguments -maes: YES 00:03:01.306 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:01.306 Compiler for C supports arguments -mavx512bw: YES 00:03:01.306 Compiler for C supports arguments -mavx512dq: YES 00:03:01.306 Compiler for C supports arguments -mavx512vl: YES 00:03:01.306 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:01.306 Compiler for C supports arguments -mavx2: YES 00:03:01.306 Compiler for C supports arguments -mavx: YES 00:03:01.306 Message: lib/net: Defining dependency "net" 00:03:01.306 Message: lib/meter: Defining dependency "meter" 00:03:01.306 Message: lib/ethdev: Defining dependency "ethdev" 00:03:01.306 Message: lib/pci: Defining dependency "pci" 00:03:01.306 Message: lib/cmdline: Defining dependency "cmdline" 00:03:01.306 Message: lib/metrics: Defining dependency "metrics" 00:03:01.306 Message: lib/hash: Defining dependency "hash" 00:03:01.306 Message: lib/timer: Defining dependency "timer" 00:03:01.306 Fetching value of define "__AVX2__" : 1 (cached) 00:03:01.306 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:03:01.306 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:03:01.306 Message: lib/acl: Defining dependency "acl" 00:03:01.306 Message: lib/bbdev: Defining dependency "bbdev" 00:03:01.306 Message: lib/bitratestats: Defining dependency "bitratestats" 00:03:01.306 Run-time dependency libelf found: YES 0.191 00:03:01.306 Message: lib/bpf: Defining dependency "bpf" 00:03:01.306 Message: lib/cfgfile: Defining dependency "cfgfile" 00:03:01.306 Message: lib/compressdev: Defining dependency "compressdev" 00:03:01.306 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:01.306 Message: lib/distributor: Defining dependency "distributor" 00:03:01.306 Message: lib/efd: Defining dependency "efd" 00:03:01.306 Message: lib/eventdev: Defining dependency "eventdev" 00:03:01.306 Message: lib/gpudev: Defining dependency "gpudev" 00:03:01.306 Message: lib/gro: Defining dependency "gro" 00:03:01.306 Message: lib/gso: Defining dependency "gso" 00:03:01.306 Message: lib/ip_frag: Defining dependency "ip_frag" 00:03:01.306 Message: lib/jobstats: Defining dependency "jobstats" 00:03:01.306 Message: lib/latencystats: Defining dependency "latencystats" 00:03:01.306 Message: lib/lpm: Defining dependency "lpm" 00:03:01.306 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512IFMA__" : (undefined) 00:03:01.306 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:03:01.306 Message: lib/member: Defining dependency "member" 00:03:01.306 Message: lib/pcapng: Defining dependency "pcapng" 00:03:01.306 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:01.306 Message: lib/power: Defining dependency "power" 00:03:01.306 Message: lib/rawdev: Defining dependency "rawdev" 00:03:01.306 Message: lib/regexdev: Defining dependency "regexdev" 00:03:01.306 Message: lib/dmadev: Defining dependency "dmadev" 00:03:01.306 Message: lib/rib: Defining 
dependency "rib" 00:03:01.306 Message: lib/reorder: Defining dependency "reorder" 00:03:01.306 Message: lib/sched: Defining dependency "sched" 00:03:01.306 Message: lib/security: Defining dependency "security" 00:03:01.306 Message: lib/stack: Defining dependency "stack" 00:03:01.306 Has header "linux/userfaultfd.h" : YES 00:03:01.306 Message: lib/vhost: Defining dependency "vhost" 00:03:01.306 Message: lib/ipsec: Defining dependency "ipsec" 00:03:01.306 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:01.306 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:03:01.306 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:03:01.306 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:01.306 Message: lib/fib: Defining dependency "fib" 00:03:01.306 Message: lib/port: Defining dependency "port" 00:03:01.306 Message: lib/pdump: Defining dependency "pdump" 00:03:01.306 Message: lib/table: Defining dependency "table" 00:03:01.306 Message: lib/pipeline: Defining dependency "pipeline" 00:03:01.306 Message: lib/graph: Defining dependency "graph" 00:03:01.306 Message: lib/node: Defining dependency "node" 00:03:01.306 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:01.306 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:01.306 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:01.306 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:01.306 Compiler for C supports arguments -Wno-sign-compare: YES 00:03:01.306 Compiler for C supports arguments -Wno-unused-value: YES 00:03:01.306 Compiler for C supports arguments -Wno-format: YES 00:03:01.306 Compiler for C supports arguments -Wno-format-security: YES 00:03:01.306 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:03:02.685 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:02.685 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:03:02.685 Compiler for C supports arguments -Wno-unused-parameter: YES 00:03:02.685 Fetching value of define "__AVX2__" : 1 (cached) 00:03:02.685 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:02.685 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.685 Compiler for C supports arguments -mavx512bw: YES (cached) 00:03:02.685 Compiler for C supports arguments -march=skylake-avx512: YES 00:03:02.685 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:03:02.685 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:02.685 Configuring doxy-api.conf using configuration 00:03:02.685 Program sphinx-build found: NO 00:03:02.685 Configuring rte_build_config.h using configuration 00:03:02.685 Message: 00:03:02.685 ================= 00:03:02.685 Applications Enabled 00:03:02.685 ================= 00:03:02.685 00:03:02.685 apps: 00:03:02.685 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:03:02.685 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:03:02.685 test-security-perf, 00:03:02.685 00:03:02.685 Message: 00:03:02.685 ================= 00:03:02.685 Libraries Enabled 00:03:02.685 ================= 00:03:02.685 00:03:02.685 libs: 00:03:02.685 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:03:02.685 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:03:02.685 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:03:02.685 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:03:02.685 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:03:02.685 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:03:02.685 table, pipeline, graph, node, 00:03:02.685 00:03:02.685 Message: 00:03:02.685 =============== 00:03:02.685 Drivers Enabled 00:03:02.685 =============== 00:03:02.685 00:03:02.685 common: 00:03:02.685 00:03:02.685 bus: 00:03:02.685 pci, vdev, 00:03:02.685 mempool: 00:03:02.685 ring, 00:03:02.685 dma: 00:03:02.685 00:03:02.685 net: 00:03:02.685 i40e, 00:03:02.685 raw: 00:03:02.685 00:03:02.685 crypto: 00:03:02.685 00:03:02.685 compress: 00:03:02.685 00:03:02.685 regex: 00:03:02.685 00:03:02.685 vdpa: 00:03:02.685 00:03:02.685 event: 00:03:02.685 00:03:02.685 baseband: 00:03:02.685 00:03:02.685 gpu: 00:03:02.685 00:03:02.685 00:03:02.685 Message: 00:03:02.685 ================= 00:03:02.685 Content Skipped 00:03:02.685 ================= 00:03:02.685 00:03:02.685 apps: 00:03:02.685 00:03:02.685 libs: 00:03:02.685 kni: explicitly disabled via build config (deprecated lib) 00:03:02.685 flow_classify: explicitly disabled via build config (deprecated lib) 00:03:02.685 00:03:02.685 drivers: 00:03:02.685 common/cpt: not in enabled drivers build config 00:03:02.685 common/dpaax: not in enabled drivers build config 00:03:02.685 common/iavf: not in enabled drivers build config 00:03:02.685 common/idpf: not in enabled drivers build config 00:03:02.685 common/mvep: not in enabled drivers build config 00:03:02.685 common/octeontx: not in enabled drivers build config 00:03:02.685 bus/auxiliary: not in enabled drivers build config 00:03:02.685 bus/dpaa: not in enabled drivers build config 00:03:02.685 bus/fslmc: not in enabled drivers build config 00:03:02.685 bus/ifpga: not in enabled drivers build config 00:03:02.685 bus/vmbus: not in enabled drivers build config 00:03:02.685 common/cnxk: not in enabled drivers build config 00:03:02.685 common/mlx5: not in enabled drivers build config 00:03:02.685 common/qat: not in enabled drivers build config 00:03:02.685 common/sfc_efx: not in enabled drivers build config 00:03:02.685 mempool/bucket: not in enabled drivers build config 00:03:02.685 mempool/cnxk: not in enabled drivers build config 00:03:02.685 mempool/dpaa: not in enabled drivers build config 00:03:02.686 mempool/dpaa2: not in enabled drivers build config 00:03:02.686 mempool/octeontx: not in enabled drivers build config 00:03:02.686 mempool/stack: not in enabled drivers build config 00:03:02.686 dma/cnxk: not in enabled drivers build config 00:03:02.686 dma/dpaa: not in enabled drivers build config 00:03:02.686 dma/dpaa2: not in enabled drivers build config 00:03:02.686 dma/hisilicon: not in enabled drivers build config 00:03:02.686 dma/idxd: not in enabled drivers build config 00:03:02.686 dma/ioat: not in enabled drivers build config 00:03:02.686 dma/skeleton: not in enabled drivers build config 00:03:02.686 net/af_packet: not in enabled drivers build config 00:03:02.686 net/af_xdp: not in enabled drivers build config 00:03:02.686 net/ark: not in enabled drivers build config 00:03:02.686 net/atlantic: not in enabled drivers build config 00:03:02.686 net/avp: not in enabled drivers build config 00:03:02.686 net/axgbe: not in enabled drivers build config 00:03:02.686 net/bnx2x: not in enabled drivers build config 00:03:02.686 net/bnxt: not in enabled drivers build config 00:03:02.686 net/bonding: not in enabled drivers build config 00:03:02.686 net/cnxk: not in enabled drivers build config 00:03:02.686 net/cxgbe: not in 
enabled drivers build config 00:03:02.686 net/dpaa: not in enabled drivers build config 00:03:02.686 net/dpaa2: not in enabled drivers build config 00:03:02.686 net/e1000: not in enabled drivers build config 00:03:02.686 net/ena: not in enabled drivers build config 00:03:02.686 net/enetc: not in enabled drivers build config 00:03:02.686 net/enetfec: not in enabled drivers build config 00:03:02.686 net/enic: not in enabled drivers build config 00:03:02.686 net/failsafe: not in enabled drivers build config 00:03:02.686 net/fm10k: not in enabled drivers build config 00:03:02.686 net/gve: not in enabled drivers build config 00:03:02.686 net/hinic: not in enabled drivers build config 00:03:02.686 net/hns3: not in enabled drivers build config 00:03:02.686 net/iavf: not in enabled drivers build config 00:03:02.686 net/ice: not in enabled drivers build config 00:03:02.686 net/idpf: not in enabled drivers build config 00:03:02.686 net/igc: not in enabled drivers build config 00:03:02.686 net/ionic: not in enabled drivers build config 00:03:02.686 net/ipn3ke: not in enabled drivers build config 00:03:02.686 net/ixgbe: not in enabled drivers build config 00:03:02.686 net/kni: not in enabled drivers build config 00:03:02.686 net/liquidio: not in enabled drivers build config 00:03:02.686 net/mana: not in enabled drivers build config 00:03:02.686 net/memif: not in enabled drivers build config 00:03:02.686 net/mlx4: not in enabled drivers build config 00:03:02.686 net/mlx5: not in enabled drivers build config 00:03:02.686 net/mvneta: not in enabled drivers build config 00:03:02.686 net/mvpp2: not in enabled drivers build config 00:03:02.686 net/netvsc: not in enabled drivers build config 00:03:02.686 net/nfb: not in enabled drivers build config 00:03:02.686 net/nfp: not in enabled drivers build config 00:03:02.686 net/ngbe: not in enabled drivers build config 00:03:02.686 net/null: not in enabled drivers build config 00:03:02.686 net/octeontx: not in enabled drivers build config 00:03:02.686 net/octeon_ep: not in enabled drivers build config 00:03:02.686 net/pcap: not in enabled drivers build config 00:03:02.686 net/pfe: not in enabled drivers build config 00:03:02.686 net/qede: not in enabled drivers build config 00:03:02.686 net/ring: not in enabled drivers build config 00:03:02.686 net/sfc: not in enabled drivers build config 00:03:02.686 net/softnic: not in enabled drivers build config 00:03:02.686 net/tap: not in enabled drivers build config 00:03:02.686 net/thunderx: not in enabled drivers build config 00:03:02.686 net/txgbe: not in enabled drivers build config 00:03:02.686 net/vdev_netvsc: not in enabled drivers build config 00:03:02.686 net/vhost: not in enabled drivers build config 00:03:02.686 net/virtio: not in enabled drivers build config 00:03:02.686 net/vmxnet3: not in enabled drivers build config 00:03:02.686 raw/cnxk_bphy: not in enabled drivers build config 00:03:02.686 raw/cnxk_gpio: not in enabled drivers build config 00:03:02.686 raw/dpaa2_cmdif: not in enabled drivers build config 00:03:02.686 raw/ifpga: not in enabled drivers build config 00:03:02.686 raw/ntb: not in enabled drivers build config 00:03:02.686 raw/skeleton: not in enabled drivers build config 00:03:02.686 crypto/armv8: not in enabled drivers build config 00:03:02.686 crypto/bcmfs: not in enabled drivers build config 00:03:02.686 crypto/caam_jr: not in enabled drivers build config 00:03:02.686 crypto/ccp: not in enabled drivers build config 00:03:02.686 crypto/cnxk: not in enabled drivers build config 00:03:02.686 
crypto/dpaa_sec: not in enabled drivers build config 00:03:02.686 crypto/dpaa2_sec: not in enabled drivers build config 00:03:02.686 crypto/ipsec_mb: not in enabled drivers build config 00:03:02.686 crypto/mlx5: not in enabled drivers build config 00:03:02.686 crypto/mvsam: not in enabled drivers build config 00:03:02.686 crypto/nitrox: not in enabled drivers build config 00:03:02.686 crypto/null: not in enabled drivers build config 00:03:02.686 crypto/octeontx: not in enabled drivers build config 00:03:02.686 crypto/openssl: not in enabled drivers build config 00:03:02.686 crypto/scheduler: not in enabled drivers build config 00:03:02.686 crypto/uadk: not in enabled drivers build config 00:03:02.686 crypto/virtio: not in enabled drivers build config 00:03:02.686 compress/isal: not in enabled drivers build config 00:03:02.686 compress/mlx5: not in enabled drivers build config 00:03:02.686 compress/octeontx: not in enabled drivers build config 00:03:02.686 compress/zlib: not in enabled drivers build config 00:03:02.686 regex/mlx5: not in enabled drivers build config 00:03:02.686 regex/cn9k: not in enabled drivers build config 00:03:02.686 vdpa/ifc: not in enabled drivers build config 00:03:02.686 vdpa/mlx5: not in enabled drivers build config 00:03:02.686 vdpa/sfc: not in enabled drivers build config 00:03:02.686 event/cnxk: not in enabled drivers build config 00:03:02.686 event/dlb2: not in enabled drivers build config 00:03:02.686 event/dpaa: not in enabled drivers build config 00:03:02.686 event/dpaa2: not in enabled drivers build config 00:03:02.686 event/dsw: not in enabled drivers build config 00:03:02.686 event/opdl: not in enabled drivers build config 00:03:02.686 event/skeleton: not in enabled drivers build config 00:03:02.686 event/sw: not in enabled drivers build config 00:03:02.686 event/octeontx: not in enabled drivers build config 00:03:02.686 baseband/acc: not in enabled drivers build config 00:03:02.686 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:03:02.686 baseband/fpga_lte_fec: not in enabled drivers build config 00:03:02.686 baseband/la12xx: not in enabled drivers build config 00:03:02.686 baseband/null: not in enabled drivers build config 00:03:02.686 baseband/turbo_sw: not in enabled drivers build config 00:03:02.686 gpu/cuda: not in enabled drivers build config 00:03:02.686 00:03:02.686 00:03:02.686 Build targets in project: 314 00:03:02.686 00:03:02.686 DPDK 22.11.4 00:03:02.686 00:03:02.686 User defined options 00:03:02.686 libdir : lib 00:03:02.686 prefix : /home/vagrant/spdk_repo/dpdk/build 00:03:02.686 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:03:02.686 c_link_args : 00:03:02.686 enable_docs : false 00:03:02.686 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:03:02.686 enable_kmods : false 00:03:02.686 machine : native 00:03:02.686 tests : false 00:03:02.686 00:03:02.686 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.686 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
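The "User defined options" block above corresponds to a standard meson configure step for this DPDK tree. A minimal sketch of an equivalent invocation is given below; the actual command is issued by SPDK's common/autobuild_common.sh wrapper and is not printed in this log (the WARNING above notes it was run as `meson [options]` rather than `meson setup [options]`), so the exact form here is an assumption reconstructed from the logged option values:

    # Illustrative only: re-expresses the logged "User defined options" as an explicit `meson setup` call.
    meson setup /home/vagrant/spdk_repo/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/dpdk/build \
        --libdir=lib \
        -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
        -Denable_kmods=false \
        -Dmachine=native \
        -Dtests=false

The build step that follows compiles the configured tree in place with `ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10`, as shown in the next log entries.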
00:03:02.946 05:55:28 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:03:02.946 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:02.946 [1/743] Generating lib/rte_telemetry_def with a custom command 00:03:02.946 [2/743] Generating lib/rte_telemetry_mingw with a custom command 00:03:02.946 [3/743] Generating lib/rte_kvargs_mingw with a custom command 00:03:02.946 [4/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:02.946 [5/743] Generating lib/rte_kvargs_def with a custom command 00:03:02.946 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:02.946 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:03.205 [8/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.205 [9/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.205 [10/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.205 [11/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:03.205 [12/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.205 [13/743] Linking static target lib/librte_kvargs.a 00:03:03.205 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.205 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:03.205 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:03.205 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.205 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:03.463 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:03:03.463 [20/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:03.463 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:03.463 [22/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:03.463 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:03.463 [24/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.463 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:03.463 [26/743] Linking target lib/librte_kvargs.so.23.0 00:03:03.463 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:03.463 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:03.463 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:03.463 [30/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:03.720 [31/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:03.720 [32/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:03.720 [33/743] Linking static target lib/librte_telemetry.a 00:03:03.720 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:03.720 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:03.720 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:03.720 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:03.720 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:03.720 [39/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:03.720 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:03.978 [41/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:03:03.978 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:03.978 [43/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.978 [44/743] Linking target lib/librte_telemetry.so.23.0 00:03:03.978 [45/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:03.978 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:03.978 [47/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:03.978 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:04.237 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:04.237 [50/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:03:04.237 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:04.237 [52/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:04.237 [53/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:04.237 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:04.237 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:04.237 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:04.237 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:04.237 [58/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:04.237 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:04.237 [60/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:04.237 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:04.237 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:04.237 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:04.495 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:04.495 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:03:04.495 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:04.495 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:04.495 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:04.495 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:04.495 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:04.495 [71/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:04.495 [72/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:04.495 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:04.495 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:04.495 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:04.495 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:04.753 [77/743] Generating lib/rte_eal_def with a custom command 00:03:04.753 [78/743] Generating lib/rte_eal_mingw with a 
custom command 00:03:04.753 [79/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:04.753 [80/743] Generating lib/rte_ring_def with a custom command 00:03:04.753 [81/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:04.753 [82/743] Generating lib/rte_ring_mingw with a custom command 00:03:04.753 [83/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:04.753 [84/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:04.753 [85/743] Generating lib/rte_rcu_def with a custom command 00:03:04.753 [86/743] Generating lib/rte_rcu_mingw with a custom command 00:03:04.753 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:04.753 [88/743] Linking static target lib/librte_ring.a 00:03:04.753 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:04.753 [90/743] Generating lib/rte_mempool_def with a custom command 00:03:05.011 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:03:05.011 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:05.011 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:05.011 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.269 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:05.269 [96/743] Linking static target lib/librte_eal.a 00:03:05.269 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:05.527 [98/743] Generating lib/rte_mbuf_def with a custom command 00:03:05.527 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:05.527 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:03:05.527 [101/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:05.527 [102/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:05.527 [103/743] Linking static target lib/librte_rcu.a 00:03:05.527 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:05.527 [105/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:05.785 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:05.785 [107/743] Linking static target lib/librte_mempool.a 00:03:05.785 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:05.785 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.785 [110/743] Generating lib/rte_net_def with a custom command 00:03:06.044 [111/743] Generating lib/rte_net_mingw with a custom command 00:03:06.044 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:06.044 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:06.044 [114/743] Generating lib/rte_meter_def with a custom command 00:03:06.044 [115/743] Generating lib/rte_meter_mingw with a custom command 00:03:06.044 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:06.044 [117/743] Linking static target lib/librte_meter.a 00:03:06.044 [118/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:06.303 [119/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:06.303 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:06.303 [121/743] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:06.303 [122/743] Generating lib/meter.sym_chk with a 
custom command (wrapped by meson to capture output) 00:03:06.561 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:06.561 [124/743] Linking static target lib/librte_mbuf.a 00:03:06.561 [125/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:06.561 [126/743] Linking static target lib/librte_net.a 00:03:06.561 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.819 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.819 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:06.819 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:06.819 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:07.078 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:07.078 [133/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.078 [134/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:07.336 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:07.594 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:07.594 [137/743] Generating lib/rte_ethdev_def with a custom command 00:03:07.594 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:03:07.594 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:07.594 [140/743] Generating lib/rte_pci_def with a custom command 00:03:07.594 [141/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:07.594 [142/743] Linking static target lib/librte_pci.a 00:03:07.853 [143/743] Generating lib/rte_pci_mingw with a custom command 00:03:07.853 [144/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:07.853 [145/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:07.853 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:07.853 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:07.853 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:07.853 [149/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.853 [150/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:07.853 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:08.112 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:08.112 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:08.112 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:08.112 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:08.112 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:08.112 [157/743] Generating lib/rte_cmdline_def with a custom command 00:03:08.112 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:03:08.112 [159/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:08.112 [160/743] Generating lib/rte_metrics_def with a custom command 00:03:08.112 [161/743] Generating lib/rte_metrics_mingw with a custom command 00:03:08.112 [162/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:08.371 [163/743] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:08.371 [164/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:03:08.371 [165/743] Generating lib/rte_hash_def with a custom command 00:03:08.371 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:08.371 [167/743] Generating lib/rte_hash_mingw with a custom command 00:03:08.371 [168/743] Generating lib/rte_timer_def with a custom command 00:03:08.371 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:08.371 [170/743] Generating lib/rte_timer_mingw with a custom command 00:03:08.371 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:08.371 [172/743] Linking static target lib/librte_cmdline.a 00:03:08.371 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:08.938 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:03:08.938 [175/743] Linking static target lib/librte_metrics.a 00:03:08.938 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:08.938 [177/743] Linking static target lib/librte_timer.a 00:03:09.198 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.198 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.198 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:09.198 [181/743] Linking static target lib/librte_ethdev.a 00:03:09.457 [182/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:03:09.457 [183/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:09.457 [184/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.770 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:03:10.057 [186/743] Generating lib/rte_acl_def with a custom command 00:03:10.057 [187/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:03:10.057 [188/743] Generating lib/rte_acl_mingw with a custom command 00:03:10.057 [189/743] Generating lib/rte_bbdev_def with a custom command 00:03:10.057 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:03:10.057 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:03:10.057 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:03:10.057 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:03:10.315 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:03:10.572 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:03:10.572 [196/743] Linking static target lib/librte_bitratestats.a 00:03:10.829 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:03:10.829 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.829 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:03:10.829 [200/743] Linking static target lib/librte_bbdev.a 00:03:11.087 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:03:11.344 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:11.344 [203/743] Linking static target lib/librte_hash.a 00:03:11.344 [204/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:03:11.344 [205/743] Linking static target lib/acl/libavx512_tmp.a 00:03:11.344 [206/743] Compiling C object 
lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:03:11.602 [207/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.602 [208/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:03:11.602 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:03:11.860 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.860 [211/743] Generating lib/rte_bpf_def with a custom command 00:03:12.116 [212/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:03:12.117 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:03:12.117 [214/743] Generating lib/rte_bpf_mingw with a custom command 00:03:12.117 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:03:12.117 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:03:12.117 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:03:12.117 [218/743] Linking static target lib/librte_acl.a 00:03:12.117 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:03:12.375 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:03:12.375 [221/743] Linking static target lib/librte_cfgfile.a 00:03:12.375 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:03:12.375 [223/743] Generating lib/rte_compressdev_def with a custom command 00:03:12.375 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:03:12.375 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.633 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.633 [227/743] Linking target lib/librte_eal.so.23.0 00:03:12.633 [228/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:03:12.633 [229/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.633 [230/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:12.634 [231/743] Generating lib/rte_cryptodev_def with a custom command 00:03:12.634 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:03:12.634 [233/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:03:12.892 [234/743] Linking target lib/librte_ring.so.23.0 00:03:12.892 [235/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:12.892 [236/743] Linking target lib/librte_meter.so.23.0 00:03:12.892 [237/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:03:12.892 [238/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:03:12.892 [239/743] Linking target lib/librte_pci.so.23.0 00:03:12.892 [240/743] Linking target lib/librte_rcu.so.23.0 00:03:12.892 [241/743] Linking target lib/librte_mempool.so.23.0 00:03:12.893 [242/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:03:12.893 [243/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:13.152 [244/743] Linking target lib/librte_timer.so.23.0 00:03:13.152 [245/743] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:03:13.152 [246/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:03:13.152 [247/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:13.152 [248/743] Linking static target lib/librte_bpf.a 
00:03:13.152 [249/743] Linking target lib/librte_acl.so.23.0 00:03:13.152 [250/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:03:13.152 [251/743] Linking target lib/librte_mbuf.so.23.0 00:03:13.152 [252/743] Linking target lib/librte_cfgfile.so.23.0 00:03:13.152 [253/743] Linking static target lib/librte_compressdev.a 00:03:13.152 [254/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:03:13.152 [255/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:03:13.152 [256/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:03:13.152 [257/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:13.411 [258/743] Generating lib/rte_distributor_def with a custom command 00:03:13.411 [259/743] Generating lib/rte_distributor_mingw with a custom command 00:03:13.411 [260/743] Linking target lib/librte_net.so.23.0 00:03:13.411 [261/743] Linking target lib/librte_bbdev.so.23.0 00:03:13.411 [262/743] Generating lib/rte_efd_def with a custom command 00:03:13.411 [263/743] Generating lib/rte_efd_mingw with a custom command 00:03:13.411 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:03:13.411 [265/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:03:13.411 [266/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.411 [267/743] Linking target lib/librte_cmdline.so.23.0 00:03:13.411 [268/743] Linking target lib/librte_hash.so.23.0 00:03:13.668 [269/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:03:13.668 [270/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:03:13.668 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:03:13.668 [272/743] Linking static target lib/librte_distributor.a 00:03:13.926 [273/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.926 [274/743] Linking target lib/librte_ethdev.so.23.0 00:03:13.926 [275/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:03:13.926 [276/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.185 [277/743] Linking target lib/librte_distributor.so.23.0 00:03:14.185 [278/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:14.185 [279/743] Linking target lib/librte_compressdev.so.23.0 00:03:14.185 [280/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:03:14.185 [281/743] Linking target lib/librte_metrics.so.23.0 00:03:14.185 [282/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:03:14.185 [283/743] Linking target lib/librte_bpf.so.23.0 00:03:14.185 [284/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:03:14.444 [285/743] Linking target lib/librte_bitratestats.so.23.0 00:03:14.444 [286/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:03:14.444 [287/743] Generating lib/rte_eventdev_def with a custom command 00:03:14.444 [288/743] Generating lib/rte_eventdev_mingw with a custom command 00:03:14.444 [289/743] Generating lib/rte_gpudev_def with a custom command 00:03:14.444 [290/743] Generating lib/rte_gpudev_mingw 
with a custom command 00:03:14.702 [291/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:03:14.961 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:03:14.961 [293/743] Linking static target lib/librte_efd.a 00:03:14.961 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:03:15.219 [295/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:15.219 [296/743] Linking static target lib/librte_cryptodev.a 00:03:15.219 [297/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:03:15.219 [298/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.219 [299/743] Linking target lib/librte_efd.so.23.0 00:03:15.219 [300/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:03:15.219 [301/743] Generating lib/rte_gro_def with a custom command 00:03:15.219 [302/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:03:15.219 [303/743] Linking static target lib/librte_gpudev.a 00:03:15.219 [304/743] Generating lib/rte_gro_mingw with a custom command 00:03:15.478 [305/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:03:15.478 [306/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:03:15.478 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:03:15.737 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:03:15.995 [309/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:03:15.995 [310/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:03:15.995 [311/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:03:15.995 [312/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:03:15.995 [313/743] Linking static target lib/librte_gro.a 00:03:15.995 [314/743] Generating lib/rte_gso_mingw with a custom command 00:03:15.995 [315/743] Generating lib/rte_gso_def with a custom command 00:03:16.254 [316/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.254 [317/743] Linking target lib/librte_gpudev.so.23.0 00:03:16.254 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:03:16.254 [319/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:03:16.254 [320/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.254 [321/743] Linking target lib/librte_gro.so.23.0 00:03:16.512 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:03:16.512 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:03:16.512 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:03:16.512 [325/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:03:16.512 [326/743] Linking static target lib/librte_eventdev.a 00:03:16.770 [327/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:03:16.770 [328/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:03:16.770 [329/743] Linking static target lib/librte_jobstats.a 00:03:16.770 [330/743] Linking static target lib/librte_gso.a 00:03:16.770 [331/743] Generating lib/rte_jobstats_def with a custom command 00:03:16.770 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:03:16.770 [333/743] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.770 
[334/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:03:16.770 [335/743] Linking target lib/librte_gso.so.23.0 00:03:17.029 [336/743] Generating lib/rte_latencystats_def with a custom command 00:03:17.029 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:03:17.029 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:03:17.029 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:03:17.029 [340/743] Generating lib/rte_lpm_def with a custom command 00:03:17.029 [341/743] Generating lib/rte_lpm_mingw with a custom command 00:03:17.029 [342/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.029 [343/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:03:17.029 [344/743] Linking target lib/librte_jobstats.so.23.0 00:03:17.029 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:03:17.287 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:03:17.287 [347/743] Linking static target lib/librte_ip_frag.a 00:03:17.287 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.287 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:03:17.545 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:03:17.545 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.545 [352/743] Linking target lib/librte_ip_frag.so.23.0 00:03:17.545 [353/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:03:17.545 [354/743] Linking static target lib/librte_latencystats.a 00:03:17.803 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:03:17.803 [356/743] Generating lib/rte_member_def with a custom command 00:03:17.803 [357/743] Generating lib/rte_member_mingw with a custom command 00:03:17.803 [358/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:03:17.803 [359/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:03:17.803 [360/743] Generating lib/rte_pcapng_def with a custom command 00:03:17.803 [361/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:03:17.803 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:03:17.803 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:03:17.803 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.803 [365/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:17.803 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:17.803 [367/743] Linking target lib/librte_latencystats.so.23.0 00:03:18.061 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:18.062 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.062 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:03:18.320 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:03:18.320 [372/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:03:18.320 [373/743] Linking static target lib/librte_lpm.a 00:03:18.320 [374/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:03:18.320 
[375/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.320 [376/743] Generating lib/rte_power_def with a custom command 00:03:18.320 [377/743] Generating lib/rte_power_mingw with a custom command 00:03:18.320 [378/743] Linking target lib/librte_eventdev.so.23.0 00:03:18.578 [379/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:03:18.578 [380/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:18.578 [381/743] Generating lib/rte_rawdev_def with a custom command 00:03:18.578 [382/743] Generating lib/rte_rawdev_mingw with a custom command 00:03:18.578 [383/743] Generating lib/rte_regexdev_def with a custom command 00:03:18.578 [384/743] Generating lib/rte_regexdev_mingw with a custom command 00:03:18.578 [385/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.578 [386/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.837 [387/743] Generating lib/rte_dmadev_def with a custom command 00:03:18.837 [388/743] Linking target lib/librte_lpm.so.23.0 00:03:18.837 [389/743] Generating lib/rte_dmadev_mingw with a custom command 00:03:18.837 [390/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:03:18.837 [391/743] Linking static target lib/librte_pcapng.a 00:03:18.837 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:03:18.837 [393/743] Linking static target lib/librte_rawdev.a 00:03:18.837 [394/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:03:18.837 [395/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.837 [396/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:03:18.837 [397/743] Generating lib/rte_rib_def with a custom command 00:03:18.837 [398/743] Generating lib/rte_rib_mingw with a custom command 00:03:18.837 [399/743] Generating lib/rte_reorder_def with a custom command 00:03:19.095 [400/743] Generating lib/rte_reorder_mingw with a custom command 00:03:19.095 [401/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.095 [402/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:19.095 [403/743] Linking static target lib/librte_dmadev.a 00:03:19.095 [404/743] Linking target lib/librte_pcapng.so.23.0 00:03:19.095 [405/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:19.095 [406/743] Linking static target lib/librte_power.a 00:03:19.354 [407/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.354 [408/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:03:19.354 [409/743] Linking target lib/librte_rawdev.so.23.0 00:03:19.354 [410/743] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:03:19.354 [411/743] Linking static target lib/librte_member.a 00:03:19.354 [412/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:03:19.354 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:03:19.354 [414/743] Generating lib/rte_sched_def with a custom command 00:03:19.612 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:03:19.612 [416/743] Generating lib/rte_sched_mingw with a custom command 00:03:19.612 [417/743] Generating lib/rte_security_def with a custom command 00:03:19.612 [418/743] 
Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:03:19.612 [419/743] Generating lib/rte_security_mingw with a custom command 00:03:19.612 [420/743] Linking static target lib/librte_regexdev.a 00:03:19.612 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:03:19.612 [422/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.612 [423/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:03:19.612 [424/743] Linking target lib/librte_dmadev.so.23.0 00:03:19.612 [425/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.612 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:19.612 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:03:19.870 [428/743] Linking static target lib/librte_reorder.a 00:03:19.870 [429/743] Linking target lib/librte_member.so.23.0 00:03:19.870 [430/743] Generating lib/rte_stack_def with a custom command 00:03:19.870 [431/743] Generating lib/rte_stack_mingw with a custom command 00:03:19.870 [432/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:03:19.870 [433/743] Linking static target lib/librte_stack.a 00:03:19.870 [434/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:03:19.870 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:19.870 [436/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.129 [437/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.129 [438/743] Linking target lib/librte_reorder.so.23.0 00:03:20.129 [439/743] Linking target lib/librte_stack.so.23.0 00:03:20.129 [440/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:03:20.129 [441/743] Linking static target lib/librte_rib.a 00:03:20.129 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.129 [443/743] Linking target lib/librte_power.so.23.0 00:03:20.129 [444/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.387 [445/743] Linking target lib/librte_regexdev.so.23.0 00:03:20.387 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:20.387 [447/743] Linking static target lib/librte_security.a 00:03:20.647 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.647 [449/743] Linking target lib/librte_rib.so.23.0 00:03:20.647 [450/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:20.647 [451/743] Generating lib/rte_vhost_def with a custom command 00:03:20.647 [452/743] Generating lib/rte_vhost_mingw with a custom command 00:03:20.647 [453/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:20.647 [454/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:03:20.906 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.906 [456/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:20.906 [457/743] Linking target lib/librte_security.so.23.0 00:03:20.906 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:03:21.165 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:03:21.165 [460/743] Linking static target lib/librte_sched.a 
00:03:21.423 [461/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:03:21.423 [462/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.423 [463/743] Linking target lib/librte_sched.so.23.0 00:03:21.423 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:21.424 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:03:21.682 [466/743] Generating lib/rte_ipsec_def with a custom command 00:03:21.682 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:03:21.682 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:03:21.682 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:21.682 [470/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:03:21.682 [471/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:03:22.250 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:03:22.250 [473/743] Generating lib/rte_fib_def with a custom command 00:03:22.250 [474/743] Generating lib/rte_fib_mingw with a custom command 00:03:22.250 [475/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:03:22.250 [476/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:03:22.250 [477/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:03:22.250 [478/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:03:22.509 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:03:22.509 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:03:22.509 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:03:22.509 [482/743] Linking static target lib/librte_ipsec.a 00:03:22.768 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.027 [484/743] Linking target lib/librte_ipsec.so.23.0 00:03:23.027 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:03:23.027 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:03:23.027 [487/743] Linking static target lib/librte_fib.a 00:03:23.027 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:03:23.286 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:03:23.286 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:03:23.286 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:03:23.555 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.555 [493/743] Linking target lib/librte_fib.so.23.0 00:03:23.555 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:24.142 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:24.142 [496/743] Generating lib/rte_port_def with a custom command 00:03:24.142 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:24.142 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:24.142 [499/743] Generating lib/rte_pdump_def with a custom command 00:03:24.142 [500/743] Generating lib/rte_pdump_mingw with a custom command 00:03:24.142 [501/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:24.142 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:24.448 [503/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:24.448 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:24.448 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:24.448 [506/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:24.719 [507/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:24.719 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:24.719 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:24.719 [510/743] Linking static target lib/librte_port.a 00:03:24.978 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:24.978 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:25.237 [513/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:25.237 [514/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.237 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:25.237 [516/743] Linking target lib/librte_port.so.23.0 00:03:25.237 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:25.496 [518/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:25.496 [519/743] Linking static target lib/librte_pdump.a 00:03:25.496 [520/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:25.755 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.755 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:26.013 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:26.013 [524/743] Generating lib/rte_table_def with a custom command 00:03:26.013 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:26.013 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:26.281 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:26.281 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:26.281 [529/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:26.281 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:26.281 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:26.281 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:26.281 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:26.540 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:26.540 [535/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:26.540 [536/743] Linking static target lib/librte_table.a 00:03:26.799 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:27.059 [538/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:27.059 [539/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:27.318 [540/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.318 [541/743] Linking target lib/librte_table.so.23.0 00:03:27.318 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:27.318 [543/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:27.318 [544/743] Generating lib/rte_graph_def with a custom command 00:03:27.318 [545/743] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:27.318 [546/743] Generating lib/rte_graph_mingw with a custom command 00:03:27.577 [547/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:27.577 [548/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:27.836 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:28.096 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:28.096 [551/743] Linking static target lib/librte_graph.a 00:03:28.096 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:28.096 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:28.355 [554/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:28.355 [555/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:28.615 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:28.615 [557/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:28.615 [558/743] Generating lib/rte_node_def with a custom command 00:03:28.615 [559/743] Generating lib/rte_node_mingw with a custom command 00:03:28.874 [560/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:28.874 [561/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:28.874 [562/743] Linking target lib/librte_graph.so.23.0 00:03:28.874 [563/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:28.874 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:28.874 [565/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:28.874 [566/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:29.133 [567/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:29.133 [568/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:29.133 [569/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:29.133 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:29.133 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:29.133 [572/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:29.133 [573/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:29.133 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:29.133 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:29.133 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:29.392 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:29.392 [578/743] Linking static target lib/librte_node.a 00:03:29.392 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:29.392 [580/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:29.392 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:29.392 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.650 [583/743] Linking target lib/librte_node.so.23.0 00:03:29.650 [584/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:29.650 [585/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.650 [586/743] Linking static target drivers/librte_bus_vdev.a 
00:03:29.650 [587/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:29.650 [588/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:29.909 [589/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:29.909 [590/743] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:29.909 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:29.909 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:29.909 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.909 [594/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:29.909 [595/743] Linking static target drivers/librte_bus_pci.a 00:03:29.909 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:30.169 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:30.169 [598/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:30.169 [599/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:30.169 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:30.428 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:30.428 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:30.429 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:30.429 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:30.687 [605/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:30.687 [606/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:30.687 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.687 [608/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:30.687 [609/743] Linking static target drivers/librte_mempool_ring.a 00:03:30.687 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:31.255 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:31.514 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:31.514 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:31.514 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:32.082 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:32.082 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:32.082 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:32.649 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:32.649 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:32.649 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:32.908 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:32.908 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:32.908 [623/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:32.908 [624/743] 
Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:33.167 [625/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:34.103 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:34.362 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:34.362 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:34.362 [629/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:34.362 [630/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:34.621 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:34.621 [632/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:34.621 [633/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:34.621 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:34.880 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:34.880 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:35.447 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:35.447 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:35.447 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:35.706 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:35.706 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:35.706 [642/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:35.706 [643/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:35.706 [644/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:35.706 [645/743] Linking static target lib/librte_vhost.a 00:03:35.966 [646/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:35.966 [647/743] Linking static target drivers/librte_net_i40e.a 00:03:35.966 [648/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:35.966 [649/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:36.224 [650/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:36.482 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:36.482 [652/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:36.741 [653/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.741 [654/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:36.741 [655/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:36.741 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:37.000 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:37.000 [658/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.259 [659/743] Linking target lib/librte_vhost.so.23.0 00:03:37.259 [660/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
00:03:37.517 [661/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:37.517 [662/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:37.517 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:37.517 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:37.517 [665/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:37.776 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:37.776 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:37.776 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:37.776 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:38.036 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:38.294 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:38.295 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:38.553 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:39.118 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:39.118 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:39.118 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:39.376 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:39.634 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:39.634 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:39.634 [680/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:39.634 [681/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:39.890 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:40.147 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:40.147 [684/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:40.147 [685/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:40.405 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:40.405 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:40.405 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:40.662 [689/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:40.662 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:40.921 [691/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:40.921 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:40.921 [693/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:40.921 [694/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:41.501 [695/743] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:41.501 [696/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 
00:03:41.501 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:41.760 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:41.760 [699/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:41.760 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:41.760 [701/743] Linking static target lib/librte_pipeline.a 00:03:42.325 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:42.325 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:42.325 [704/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:42.582 [705/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:42.840 [706/743] Linking target app/dpdk-dumpcap 00:03:42.840 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:42.840 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:42.840 [709/743] Linking target app/dpdk-pdump 00:03:42.840 [710/743] Linking target app/dpdk-proc-info 00:03:43.098 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:43.098 [712/743] Linking target app/dpdk-test-acl 00:03:43.098 [713/743] Linking target app/dpdk-test-bbdev 00:03:43.356 [714/743] Linking target app/dpdk-test-compress-perf 00:03:43.356 [715/743] Linking target app/dpdk-test-crypto-perf 00:03:43.356 [716/743] Linking target app/dpdk-test-cmdline 00:03:43.356 [717/743] Linking target app/dpdk-test-eventdev 00:03:43.356 [718/743] Linking target app/dpdk-test-fib 00:03:43.614 [719/743] Linking target app/dpdk-test-gpudev 00:03:43.614 [720/743] Linking target app/dpdk-test-flow-perf 00:03:43.614 [721/743] Linking target app/dpdk-test-pipeline 00:03:44.178 [722/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:44.178 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:44.178 [724/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:44.436 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:44.436 [726/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:44.436 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:44.436 [728/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.694 [729/743] Linking target lib/librte_pipeline.so.23.0 00:03:44.952 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:45.210 [731/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:45.210 [732/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:45.210 [733/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:45.210 [734/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:45.210 [735/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:45.468 [736/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:45.726 [737/743] Linking target app/dpdk-test-sad 00:03:45.726 [738/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:45.726 [739/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:45.984 [740/743] Linking target app/dpdk-test-regex 00:03:45.984 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:46.243 [742/743] Linking target app/dpdk-testpmd 00:03:46.501 [743/743] Linking target 
app/dpdk-test-security-perf 00:03:46.501 05:56:11 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:46.501 05:56:11 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:46.501 05:56:11 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:46.501 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:46.501 [0/1] Installing files. 00:03:46.762 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:46.762 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 
00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.762 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:46.763 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.763 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 
Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 
00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.764 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 
00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.765 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:46.766 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 
Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:47.026 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:47.027 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:47.027 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_hash.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.027 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing 
lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.028 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 
00:03:47.028 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.028 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.028 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.028 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:47.028 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.028 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 
00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.289 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing 
/home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.290 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing 
/home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.291 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing 
/home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:47.292 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:47.292 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:47.293 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:47.293 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:47.293 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:47.293 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:47.293 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:47.293 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:47.293 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:47.293 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:47.293 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:47.293 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:47.293 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:47.293 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:47.293 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:47.293 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:47.293 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:47.293 Installing symlink 
pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:47.293 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:47.293 Installing symlink pointing to librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:47.293 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:47.293 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:47.293 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:47.293 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:47.293 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:47.293 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:47.293 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:47.293 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:47.293 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:47.293 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:47.293 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:47.293 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:47.293 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:47.293 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:47.293 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:47.293 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:47.293 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:47.293 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:47.293 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:47.293 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:47.293 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:47.293 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:47.293 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:47.293 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:47.293 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:47.293 Installing symlink pointing to 
librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:47.293 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:47.293 Installing symlink pointing to librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:47.293 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:47.293 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:47.293 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:47.293 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:47.293 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:47.293 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:47.293 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:47.293 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:47.293 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:47.293 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:47.293 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:47.293 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:47.293 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:47.293 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:47.293 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:47.293 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:47.293 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:47.293 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:47.293 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:47.293 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:47.293 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:47.293 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:47.293 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:47.293 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:47.293 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:47.293 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:47.293 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:47.293 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:47.293 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:47.293 Installing symlink pointing to librte_member.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:47.293 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:47.293 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 00:03:47.293 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:47.293 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:47.293 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:47.293 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:47.293 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:47.293 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:47.293 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:47.293 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:47.293 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:47.293 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:47.293 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:47.293 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:47.293 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:47.293 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:47.293 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:47.293 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:47.294 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:47.294 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:47.294 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:47.294 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:47.294 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:47.294 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:47.294 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:47.294 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:47.294 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:47.294 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 
00:03:47.294 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:47.294 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:47.294 Installing symlink pointing to librte_pdump.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:47.294 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:47.294 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:47.294 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:47.294 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:47.294 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:47.294 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:47.294 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:47.294 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:47.294 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:47.294 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:47.294 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:47.294 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:47.294 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:47.294 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:47.294 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:47.294 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:47.294 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:47.294 05:56:12 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:47.294 05:56:12 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:47.294 00:03:47.294 real 0m51.558s 00:03:47.294 user 6m8.408s 00:03:47.294 sys 0m54.637s 00:03:47.294 05:56:12 build_native_dpdk -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:47.294 05:56:12 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:47.294 ************************************ 00:03:47.294 END TEST build_native_dpdk 00:03:47.294 ************************************ 00:03:47.552 05:56:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:47.552 05:56:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@55 -- $ [[ -n 
'' ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:47.552 05:56:12 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:47.552 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:47.811 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:47.811 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:47.811 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:48.069 Using 'verbs' RDMA provider 00:04:01.218 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:04:16.174 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:04:16.174 Creating mk/config.mk...done. 00:04:16.174 Creating mk/cc.flags.mk...done. 00:04:16.174 Type 'make' to build. 00:04:16.174 05:56:40 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:04:16.174 05:56:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:04:16.174 05:56:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:04:16.174 05:56:40 -- common/autotest_common.sh@10 -- $ set +x 00:04:16.174 ************************************ 00:04:16.174 START TEST make 00:04:16.174 ************************************ 00:04:16.175 05:56:40 make -- common/autotest_common.sh@1125 -- $ make -j10 00:04:16.175 make[1]: Nothing to be done for 'all'. 00:05:12.402 CC lib/ut_mock/mock.o 00:05:12.402 CC lib/ut/ut.o 00:05:12.402 CC lib/log/log_deprecated.o 00:05:12.402 CC lib/log/log_flags.o 00:05:12.402 CC lib/log/log.o 00:05:12.402 LIB libspdk_ut_mock.a 00:05:12.402 LIB libspdk_log.a 00:05:12.402 LIB libspdk_ut.a 00:05:12.402 SO libspdk_ut_mock.so.6.0 00:05:12.402 SO libspdk_ut.so.2.0 00:05:12.402 SO libspdk_log.so.7.0 00:05:12.402 SYMLINK libspdk_ut_mock.so 00:05:12.402 SYMLINK libspdk_ut.so 00:05:12.402 SYMLINK libspdk_log.so 00:05:12.402 CC lib/util/base64.o 00:05:12.402 CC lib/util/bit_array.o 00:05:12.402 CC lib/util/crc16.o 00:05:12.402 CC lib/util/cpuset.o 00:05:12.402 CC lib/util/crc32.o 00:05:12.402 CC lib/util/crc32c.o 00:05:12.402 CC lib/ioat/ioat.o 00:05:12.402 CC lib/dma/dma.o 00:05:12.402 CXX lib/trace_parser/trace.o 00:05:12.402 CC lib/vfio_user/host/vfio_user_pci.o 00:05:12.402 CC lib/util/crc32_ieee.o 00:05:12.402 CC lib/util/crc64.o 00:05:12.402 CC lib/vfio_user/host/vfio_user.o 00:05:12.402 CC lib/util/dif.o 00:05:12.402 CC lib/util/fd.o 00:05:12.402 LIB libspdk_dma.a 00:05:12.402 CC lib/util/fd_group.o 00:05:12.402 SO libspdk_dma.so.5.0 00:05:12.402 CC lib/util/file.o 00:05:12.402 LIB libspdk_ioat.a 00:05:12.402 CC lib/util/hexlify.o 00:05:12.402 SYMLINK libspdk_dma.so 00:05:12.402 CC lib/util/iov.o 00:05:12.402 SO libspdk_ioat.so.7.0 00:05:12.402 CC lib/util/math.o 00:05:12.402 SYMLINK libspdk_ioat.so 00:05:12.402 CC lib/util/net.o 00:05:12.402 CC lib/util/pipe.o 00:05:12.402 LIB libspdk_vfio_user.a 00:05:12.402 SO libspdk_vfio_user.so.5.0 00:05:12.402 CC lib/util/strerror_tls.o 00:05:12.402 SYMLINK libspdk_vfio_user.so 00:05:12.402 CC lib/util/string.o 00:05:12.402 CC lib/util/uuid.o 00:05:12.402 CC lib/util/xor.o 00:05:12.402 CC lib/util/zipf.o 00:05:12.402 CC lib/util/md5.o 
00:05:12.402 LIB libspdk_util.a 00:05:12.402 SO libspdk_util.so.10.0 00:05:12.402 LIB libspdk_trace_parser.a 00:05:12.402 SO libspdk_trace_parser.so.6.0 00:05:12.402 SYMLINK libspdk_util.so 00:05:12.402 SYMLINK libspdk_trace_parser.so 00:05:12.402 CC lib/rdma_utils/rdma_utils.o 00:05:12.402 CC lib/rdma_provider/common.o 00:05:12.402 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:12.402 CC lib/env_dpdk/memory.o 00:05:12.402 CC lib/env_dpdk/pci.o 00:05:12.402 CC lib/env_dpdk/env.o 00:05:12.402 CC lib/vmd/vmd.o 00:05:12.402 CC lib/json/json_parse.o 00:05:12.402 CC lib/conf/conf.o 00:05:12.402 CC lib/idxd/idxd.o 00:05:12.402 LIB libspdk_rdma_provider.a 00:05:12.402 CC lib/idxd/idxd_user.o 00:05:12.402 LIB libspdk_conf.a 00:05:12.402 SO libspdk_rdma_provider.so.6.0 00:05:12.402 SO libspdk_conf.so.6.0 00:05:12.402 CC lib/json/json_util.o 00:05:12.402 LIB libspdk_rdma_utils.a 00:05:12.402 SYMLINK libspdk_conf.so 00:05:12.402 CC lib/json/json_write.o 00:05:12.402 SYMLINK libspdk_rdma_provider.so 00:05:12.402 CC lib/vmd/led.o 00:05:12.402 SO libspdk_rdma_utils.so.1.0 00:05:12.402 CC lib/env_dpdk/init.o 00:05:12.402 CC lib/env_dpdk/threads.o 00:05:12.402 SYMLINK libspdk_rdma_utils.so 00:05:12.402 CC lib/env_dpdk/pci_ioat.o 00:05:12.402 CC lib/idxd/idxd_kernel.o 00:05:12.402 CC lib/env_dpdk/pci_virtio.o 00:05:12.402 CC lib/env_dpdk/pci_vmd.o 00:05:12.402 CC lib/env_dpdk/pci_idxd.o 00:05:12.402 CC lib/env_dpdk/pci_event.o 00:05:12.402 LIB libspdk_json.a 00:05:12.402 CC lib/env_dpdk/sigbus_handler.o 00:05:12.402 SO libspdk_json.so.6.0 00:05:12.402 CC lib/env_dpdk/pci_dpdk.o 00:05:12.402 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:12.402 LIB libspdk_vmd.a 00:05:12.402 SYMLINK libspdk_json.so 00:05:12.402 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:12.402 SO libspdk_vmd.so.6.0 00:05:12.402 LIB libspdk_idxd.a 00:05:12.402 SYMLINK libspdk_vmd.so 00:05:12.402 SO libspdk_idxd.so.12.1 00:05:12.402 SYMLINK libspdk_idxd.so 00:05:12.402 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:12.402 CC lib/jsonrpc/jsonrpc_server.o 00:05:12.402 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:12.402 CC lib/jsonrpc/jsonrpc_client.o 00:05:12.402 LIB libspdk_jsonrpc.a 00:05:12.402 SO libspdk_jsonrpc.so.6.0 00:05:12.402 SYMLINK libspdk_jsonrpc.so 00:05:12.662 LIB libspdk_env_dpdk.a 00:05:12.662 CC lib/rpc/rpc.o 00:05:12.662 SO libspdk_env_dpdk.so.15.0 00:05:12.921 SYMLINK libspdk_env_dpdk.so 00:05:12.921 LIB libspdk_rpc.a 00:05:12.921 SO libspdk_rpc.so.6.0 00:05:13.179 SYMLINK libspdk_rpc.so 00:05:13.179 CC lib/trace/trace_flags.o 00:05:13.179 CC lib/trace/trace.o 00:05:13.179 CC lib/trace/trace_rpc.o 00:05:13.179 CC lib/notify/notify.o 00:05:13.179 CC lib/notify/notify_rpc.o 00:05:13.179 CC lib/keyring/keyring.o 00:05:13.179 CC lib/keyring/keyring_rpc.o 00:05:13.438 LIB libspdk_notify.a 00:05:13.438 SO libspdk_notify.so.6.0 00:05:13.695 SYMLINK libspdk_notify.so 00:05:13.695 LIB libspdk_keyring.a 00:05:13.695 SO libspdk_keyring.so.2.0 00:05:13.695 LIB libspdk_trace.a 00:05:13.695 SO libspdk_trace.so.11.0 00:05:13.695 SYMLINK libspdk_keyring.so 00:05:13.695 SYMLINK libspdk_trace.so 00:05:13.953 CC lib/sock/sock.o 00:05:13.953 CC lib/sock/sock_rpc.o 00:05:13.953 CC lib/thread/thread.o 00:05:13.953 CC lib/thread/iobuf.o 00:05:14.521 LIB libspdk_sock.a 00:05:14.521 SO libspdk_sock.so.10.0 00:05:14.521 SYMLINK libspdk_sock.so 00:05:14.779 CC lib/nvme/nvme_ctrlr.o 00:05:14.779 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:14.779 CC lib/nvme/nvme_fabric.o 00:05:14.779 CC lib/nvme/nvme_ns_cmd.o 00:05:14.779 CC lib/nvme/nvme_ns.o 00:05:14.779 CC 
lib/nvme/nvme_pcie_common.o 00:05:14.779 CC lib/nvme/nvme_pcie.o 00:05:14.779 CC lib/nvme/nvme_qpair.o 00:05:14.779 CC lib/nvme/nvme.o 00:05:15.715 LIB libspdk_thread.a 00:05:15.715 CC lib/nvme/nvme_quirks.o 00:05:15.715 SO libspdk_thread.so.10.1 00:05:15.715 CC lib/nvme/nvme_transport.o 00:05:15.715 CC lib/nvme/nvme_discovery.o 00:05:15.715 SYMLINK libspdk_thread.so 00:05:15.715 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:15.715 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:15.715 CC lib/nvme/nvme_tcp.o 00:05:15.974 CC lib/nvme/nvme_opal.o 00:05:15.974 CC lib/nvme/nvme_io_msg.o 00:05:15.974 CC lib/nvme/nvme_poll_group.o 00:05:16.233 CC lib/nvme/nvme_zns.o 00:05:16.491 CC lib/nvme/nvme_stubs.o 00:05:16.491 CC lib/nvme/nvme_auth.o 00:05:16.491 CC lib/accel/accel.o 00:05:16.491 CC lib/blob/blobstore.o 00:05:16.749 CC lib/blob/request.o 00:05:16.749 CC lib/init/json_config.o 00:05:16.749 CC lib/nvme/nvme_cuse.o 00:05:17.006 CC lib/blob/zeroes.o 00:05:17.006 CC lib/blob/blob_bs_dev.o 00:05:17.006 CC lib/init/subsystem.o 00:05:17.006 CC lib/init/subsystem_rpc.o 00:05:17.006 CC lib/init/rpc.o 00:05:17.265 CC lib/nvme/nvme_rdma.o 00:05:17.265 CC lib/accel/accel_rpc.o 00:05:17.265 CC lib/accel/accel_sw.o 00:05:17.265 CC lib/virtio/virtio.o 00:05:17.265 LIB libspdk_init.a 00:05:17.265 CC lib/virtio/virtio_vhost_user.o 00:05:17.523 SO libspdk_init.so.6.0 00:05:17.523 SYMLINK libspdk_init.so 00:05:17.523 CC lib/virtio/virtio_vfio_user.o 00:05:17.523 CC lib/virtio/virtio_pci.o 00:05:17.782 LIB libspdk_accel.a 00:05:17.782 CC lib/fsdev/fsdev.o 00:05:17.782 CC lib/fsdev/fsdev_io.o 00:05:17.782 CC lib/fsdev/fsdev_rpc.o 00:05:17.782 SO libspdk_accel.so.16.0 00:05:17.782 CC lib/event/app.o 00:05:17.782 CC lib/event/reactor.o 00:05:17.782 CC lib/event/log_rpc.o 00:05:17.782 SYMLINK libspdk_accel.so 00:05:17.782 CC lib/event/app_rpc.o 00:05:17.782 LIB libspdk_virtio.a 00:05:17.782 SO libspdk_virtio.so.7.0 00:05:18.040 CC lib/event/scheduler_static.o 00:05:18.041 SYMLINK libspdk_virtio.so 00:05:18.041 CC lib/bdev/bdev.o 00:05:18.041 CC lib/bdev/bdev_zone.o 00:05:18.041 CC lib/bdev/bdev_rpc.o 00:05:18.041 CC lib/bdev/part.o 00:05:18.041 CC lib/bdev/scsi_nvme.o 00:05:18.300 LIB libspdk_event.a 00:05:18.300 SO libspdk_event.so.14.0 00:05:18.300 SYMLINK libspdk_event.so 00:05:18.300 LIB libspdk_fsdev.a 00:05:18.559 SO libspdk_fsdev.so.1.0 00:05:18.559 SYMLINK libspdk_fsdev.so 00:05:18.559 LIB libspdk_nvme.a 00:05:18.817 SO libspdk_nvme.so.14.0 00:05:18.817 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:19.076 SYMLINK libspdk_nvme.so 00:05:19.334 LIB libspdk_fuse_dispatcher.a 00:05:19.334 SO libspdk_fuse_dispatcher.so.1.0 00:05:19.593 SYMLINK libspdk_fuse_dispatcher.so 00:05:19.854 LIB libspdk_blob.a 00:05:19.854 SO libspdk_blob.so.11.0 00:05:19.854 SYMLINK libspdk_blob.so 00:05:20.118 CC lib/blobfs/tree.o 00:05:20.118 CC lib/blobfs/blobfs.o 00:05:20.118 CC lib/lvol/lvol.o 00:05:21.053 LIB libspdk_bdev.a 00:05:21.053 SO libspdk_bdev.so.16.0 00:05:21.053 LIB libspdk_blobfs.a 00:05:21.053 SO libspdk_blobfs.so.10.0 00:05:21.053 SYMLINK libspdk_bdev.so 00:05:21.053 SYMLINK libspdk_blobfs.so 00:05:21.053 LIB libspdk_lvol.a 00:05:21.312 SO libspdk_lvol.so.10.0 00:05:21.312 CC lib/scsi/dev.o 00:05:21.312 CC lib/scsi/lun.o 00:05:21.312 CC lib/nbd/nbd.o 00:05:21.312 CC lib/scsi/port.o 00:05:21.312 CC lib/scsi/scsi.o 00:05:21.312 CC lib/scsi/scsi_bdev.o 00:05:21.312 CC lib/ublk/ublk.o 00:05:21.312 CC lib/ftl/ftl_core.o 00:05:21.312 CC lib/nvmf/ctrlr.o 00:05:21.312 SYMLINK libspdk_lvol.so 00:05:21.312 CC lib/ftl/ftl_init.o 00:05:21.312 
CC lib/ftl/ftl_layout.o 00:05:21.312 CC lib/ublk/ublk_rpc.o 00:05:21.570 CC lib/nvmf/ctrlr_discovery.o 00:05:21.570 CC lib/scsi/scsi_pr.o 00:05:21.570 CC lib/ftl/ftl_debug.o 00:05:21.570 CC lib/nbd/nbd_rpc.o 00:05:21.570 CC lib/ftl/ftl_io.o 00:05:21.570 CC lib/ftl/ftl_sb.o 00:05:21.829 CC lib/scsi/scsi_rpc.o 00:05:21.829 CC lib/ftl/ftl_l2p.o 00:05:21.829 LIB libspdk_nbd.a 00:05:21.829 CC lib/nvmf/ctrlr_bdev.o 00:05:21.829 SO libspdk_nbd.so.7.0 00:05:21.829 SYMLINK libspdk_nbd.so 00:05:21.829 CC lib/ftl/ftl_l2p_flat.o 00:05:21.829 CC lib/ftl/ftl_nv_cache.o 00:05:21.829 CC lib/nvmf/subsystem.o 00:05:21.829 CC lib/nvmf/nvmf.o 00:05:21.829 CC lib/scsi/task.o 00:05:21.829 LIB libspdk_ublk.a 00:05:22.087 SO libspdk_ublk.so.3.0 00:05:22.087 CC lib/ftl/ftl_band.o 00:05:22.087 SYMLINK libspdk_ublk.so 00:05:22.087 CC lib/nvmf/nvmf_rpc.o 00:05:22.087 CC lib/nvmf/transport.o 00:05:22.087 CC lib/ftl/ftl_band_ops.o 00:05:22.087 LIB libspdk_scsi.a 00:05:22.087 SO libspdk_scsi.so.9.0 00:05:22.344 SYMLINK libspdk_scsi.so 00:05:22.344 CC lib/ftl/ftl_writer.o 00:05:22.344 CC lib/ftl/ftl_rq.o 00:05:22.344 CC lib/ftl/ftl_reloc.o 00:05:22.344 CC lib/ftl/ftl_l2p_cache.o 00:05:22.602 CC lib/ftl/ftl_p2l.o 00:05:22.602 CC lib/ftl/ftl_p2l_log.o 00:05:22.860 CC lib/nvmf/tcp.o 00:05:22.860 CC lib/nvmf/stubs.o 00:05:22.860 CC lib/nvmf/mdns_server.o 00:05:22.860 CC lib/ftl/mngt/ftl_mngt.o 00:05:22.860 CC lib/nvmf/rdma.o 00:05:22.860 CC lib/nvmf/auth.o 00:05:22.860 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:23.118 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:23.118 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:23.118 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:23.118 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:23.118 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:23.376 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:23.376 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:23.376 CC lib/iscsi/conn.o 00:05:23.376 CC lib/vhost/vhost.o 00:05:23.376 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:23.376 CC lib/iscsi/init_grp.o 00:05:23.376 CC lib/iscsi/iscsi.o 00:05:23.634 CC lib/iscsi/param.o 00:05:23.634 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:23.634 CC lib/vhost/vhost_rpc.o 00:05:23.634 CC lib/vhost/vhost_scsi.o 00:05:23.892 CC lib/iscsi/portal_grp.o 00:05:23.892 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:23.892 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:23.892 CC lib/ftl/utils/ftl_conf.o 00:05:24.149 CC lib/vhost/vhost_blk.o 00:05:24.149 CC lib/vhost/rte_vhost_user.o 00:05:24.149 CC lib/ftl/utils/ftl_md.o 00:05:24.149 CC lib/ftl/utils/ftl_mempool.o 00:05:24.407 CC lib/ftl/utils/ftl_bitmap.o 00:05:24.407 CC lib/iscsi/tgt_node.o 00:05:24.407 CC lib/iscsi/iscsi_subsystem.o 00:05:24.407 CC lib/iscsi/iscsi_rpc.o 00:05:24.407 CC lib/iscsi/task.o 00:05:24.665 CC lib/ftl/utils/ftl_property.o 00:05:24.665 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:24.665 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:24.924 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:24.924 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:24.924 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:24.924 LIB libspdk_nvmf.a 00:05:24.924 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:24.924 LIB libspdk_iscsi.a 00:05:24.924 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:24.924 SO libspdk_nvmf.so.19.0 00:05:24.924 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:24.924 SO libspdk_iscsi.so.8.0 00:05:25.182 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:25.182 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:25.182 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:25.182 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:25.182 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:25.182 LIB libspdk_vhost.a 
00:05:25.182 SYMLINK libspdk_nvmf.so 00:05:25.182 CC lib/ftl/base/ftl_base_dev.o 00:05:25.182 SYMLINK libspdk_iscsi.so 00:05:25.182 CC lib/ftl/base/ftl_base_bdev.o 00:05:25.182 CC lib/ftl/ftl_trace.o 00:05:25.182 SO libspdk_vhost.so.8.0 00:05:25.440 SYMLINK libspdk_vhost.so 00:05:25.440 LIB libspdk_ftl.a 00:05:25.698 SO libspdk_ftl.so.9.0 00:05:25.957 SYMLINK libspdk_ftl.so 00:05:26.215 CC module/env_dpdk/env_dpdk_rpc.o 00:05:26.473 CC module/fsdev/aio/fsdev_aio.o 00:05:26.473 CC module/blob/bdev/blob_bdev.o 00:05:26.473 CC module/keyring/linux/keyring.o 00:05:26.473 CC module/sock/posix/posix.o 00:05:26.473 CC module/keyring/file/keyring.o 00:05:26.473 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:26.473 CC module/sock/uring/uring.o 00:05:26.473 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:26.473 CC module/accel/error/accel_error.o 00:05:26.473 LIB libspdk_env_dpdk_rpc.a 00:05:26.473 SO libspdk_env_dpdk_rpc.so.6.0 00:05:26.473 SYMLINK libspdk_env_dpdk_rpc.so 00:05:26.473 CC module/accel/error/accel_error_rpc.o 00:05:26.473 CC module/keyring/file/keyring_rpc.o 00:05:26.473 CC module/keyring/linux/keyring_rpc.o 00:05:26.473 LIB libspdk_scheduler_dpdk_governor.a 00:05:26.732 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:26.732 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:26.732 LIB libspdk_scheduler_dynamic.a 00:05:26.732 LIB libspdk_blob_bdev.a 00:05:26.732 SO libspdk_scheduler_dynamic.so.4.0 00:05:26.732 LIB libspdk_keyring_linux.a 00:05:26.732 LIB libspdk_accel_error.a 00:05:26.732 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:26.732 LIB libspdk_keyring_file.a 00:05:26.732 SO libspdk_blob_bdev.so.11.0 00:05:26.732 SO libspdk_keyring_linux.so.1.0 00:05:26.732 SO libspdk_accel_error.so.2.0 00:05:26.732 SO libspdk_keyring_file.so.2.0 00:05:26.732 SYMLINK libspdk_scheduler_dynamic.so 00:05:26.732 SYMLINK libspdk_blob_bdev.so 00:05:26.732 SYMLINK libspdk_keyring_file.so 00:05:26.732 SYMLINK libspdk_accel_error.so 00:05:26.732 SYMLINK libspdk_keyring_linux.so 00:05:26.732 CC module/fsdev/aio/linux_aio_mgr.o 00:05:26.991 CC module/scheduler/gscheduler/gscheduler.o 00:05:26.991 CC module/accel/ioat/accel_ioat.o 00:05:26.991 CC module/accel/dsa/accel_dsa.o 00:05:26.991 CC module/accel/iaa/accel_iaa.o 00:05:26.991 CC module/accel/iaa/accel_iaa_rpc.o 00:05:26.991 LIB libspdk_fsdev_aio.a 00:05:26.991 CC module/bdev/delay/vbdev_delay.o 00:05:26.991 CC module/blobfs/bdev/blobfs_bdev.o 00:05:26.991 SO libspdk_fsdev_aio.so.1.0 00:05:26.991 LIB libspdk_scheduler_gscheduler.a 00:05:26.991 LIB libspdk_sock_uring.a 00:05:26.991 SO libspdk_scheduler_gscheduler.so.4.0 00:05:26.991 LIB libspdk_sock_posix.a 00:05:26.991 SO libspdk_sock_uring.so.5.0 00:05:27.250 SYMLINK libspdk_fsdev_aio.so 00:05:27.250 CC module/accel/dsa/accel_dsa_rpc.o 00:05:27.250 SO libspdk_sock_posix.so.6.0 00:05:27.250 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:27.250 SYMLINK libspdk_scheduler_gscheduler.so 00:05:27.250 CC module/accel/ioat/accel_ioat_rpc.o 00:05:27.250 SYMLINK libspdk_sock_uring.so 00:05:27.250 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:27.250 LIB libspdk_accel_iaa.a 00:05:27.250 SYMLINK libspdk_sock_posix.so 00:05:27.250 SO libspdk_accel_iaa.so.3.0 00:05:27.250 LIB libspdk_accel_dsa.a 00:05:27.250 SYMLINK libspdk_accel_iaa.so 00:05:27.250 LIB libspdk_accel_ioat.a 00:05:27.250 SO libspdk_accel_dsa.so.5.0 00:05:27.250 SO libspdk_accel_ioat.so.6.0 00:05:27.250 CC module/bdev/error/vbdev_error.o 00:05:27.508 CC module/bdev/error/vbdev_error_rpc.o 00:05:27.508 LIB libspdk_blobfs_bdev.a 00:05:27.508 
SYMLINK libspdk_accel_dsa.so 00:05:27.508 CC module/bdev/gpt/gpt.o 00:05:27.508 SYMLINK libspdk_accel_ioat.so 00:05:27.508 SO libspdk_blobfs_bdev.so.6.0 00:05:27.508 LIB libspdk_bdev_delay.a 00:05:27.508 CC module/bdev/malloc/bdev_malloc.o 00:05:27.508 CC module/bdev/lvol/vbdev_lvol.o 00:05:27.508 SO libspdk_bdev_delay.so.6.0 00:05:27.508 CC module/bdev/null/bdev_null.o 00:05:27.508 SYMLINK libspdk_blobfs_bdev.so 00:05:27.508 SYMLINK libspdk_bdev_delay.so 00:05:27.508 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:27.508 CC module/bdev/nvme/bdev_nvme.o 00:05:27.508 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:27.508 CC module/bdev/passthru/vbdev_passthru.o 00:05:27.508 CC module/bdev/gpt/vbdev_gpt.o 00:05:27.508 LIB libspdk_bdev_error.a 00:05:27.767 SO libspdk_bdev_error.so.6.0 00:05:27.767 CC module/bdev/raid/bdev_raid.o 00:05:27.767 CC module/bdev/raid/bdev_raid_rpc.o 00:05:27.767 SYMLINK libspdk_bdev_error.so 00:05:27.767 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:27.767 CC module/bdev/null/bdev_null_rpc.o 00:05:27.767 LIB libspdk_bdev_malloc.a 00:05:27.767 SO libspdk_bdev_malloc.so.6.0 00:05:27.767 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:27.767 LIB libspdk_bdev_gpt.a 00:05:28.026 SYMLINK libspdk_bdev_malloc.so 00:05:28.026 CC module/bdev/nvme/nvme_rpc.o 00:05:28.026 SO libspdk_bdev_gpt.so.6.0 00:05:28.026 LIB libspdk_bdev_null.a 00:05:28.026 SO libspdk_bdev_null.so.6.0 00:05:28.026 SYMLINK libspdk_bdev_gpt.so 00:05:28.026 SYMLINK libspdk_bdev_null.so 00:05:28.026 LIB libspdk_bdev_passthru.a 00:05:28.026 SO libspdk_bdev_passthru.so.6.0 00:05:28.026 LIB libspdk_bdev_lvol.a 00:05:28.026 CC module/bdev/split/vbdev_split.o 00:05:28.026 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:28.026 SO libspdk_bdev_lvol.so.6.0 00:05:28.285 CC module/bdev/uring/bdev_uring.o 00:05:28.285 SYMLINK libspdk_bdev_passthru.so 00:05:28.285 CC module/bdev/nvme/bdev_mdns_client.o 00:05:28.285 CC module/bdev/nvme/vbdev_opal.o 00:05:28.285 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:28.285 SYMLINK libspdk_bdev_lvol.so 00:05:28.285 CC module/bdev/uring/bdev_uring_rpc.o 00:05:28.285 CC module/bdev/aio/bdev_aio.o 00:05:28.285 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:28.285 CC module/bdev/split/vbdev_split_rpc.o 00:05:28.285 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:28.285 CC module/bdev/aio/bdev_aio_rpc.o 00:05:28.544 CC module/bdev/raid/bdev_raid_sb.o 00:05:28.545 LIB libspdk_bdev_zone_block.a 00:05:28.545 LIB libspdk_bdev_uring.a 00:05:28.545 SO libspdk_bdev_zone_block.so.6.0 00:05:28.545 LIB libspdk_bdev_split.a 00:05:28.545 SO libspdk_bdev_uring.so.6.0 00:05:28.545 CC module/bdev/raid/raid0.o 00:05:28.545 SO libspdk_bdev_split.so.6.0 00:05:28.545 LIB libspdk_bdev_aio.a 00:05:28.545 SYMLINK libspdk_bdev_zone_block.so 00:05:28.545 SO libspdk_bdev_aio.so.6.0 00:05:28.545 SYMLINK libspdk_bdev_split.so 00:05:28.545 SYMLINK libspdk_bdev_uring.so 00:05:28.804 CC module/bdev/raid/raid1.o 00:05:28.804 CC module/bdev/raid/concat.o 00:05:28.804 SYMLINK libspdk_bdev_aio.so 00:05:28.804 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:28.804 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:28.804 CC module/bdev/ftl/bdev_ftl.o 00:05:28.804 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:28.804 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:28.804 CC module/bdev/iscsi/bdev_iscsi.o 00:05:28.804 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:29.063 LIB libspdk_bdev_raid.a 00:05:29.063 SO libspdk_bdev_raid.so.6.0 00:05:29.063 SYMLINK libspdk_bdev_raid.so 00:05:29.063 LIB libspdk_bdev_ftl.a 00:05:29.063 SO 
libspdk_bdev_ftl.so.6.0 00:05:29.063 LIB libspdk_bdev_iscsi.a 00:05:29.322 SYMLINK libspdk_bdev_ftl.so 00:05:29.322 SO libspdk_bdev_iscsi.so.6.0 00:05:29.322 SYMLINK libspdk_bdev_iscsi.so 00:05:29.322 LIB libspdk_bdev_virtio.a 00:05:29.322 SO libspdk_bdev_virtio.so.6.0 00:05:29.581 SYMLINK libspdk_bdev_virtio.so 00:05:29.840 LIB libspdk_bdev_nvme.a 00:05:29.840 SO libspdk_bdev_nvme.so.7.0 00:05:30.100 SYMLINK libspdk_bdev_nvme.so 00:05:30.359 CC module/event/subsystems/fsdev/fsdev.o 00:05:30.359 CC module/event/subsystems/iobuf/iobuf.o 00:05:30.359 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:30.359 CC module/event/subsystems/vmd/vmd.o 00:05:30.359 CC module/event/subsystems/sock/sock.o 00:05:30.359 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:30.359 CC module/event/subsystems/keyring/keyring.o 00:05:30.359 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:30.359 CC module/event/subsystems/scheduler/scheduler.o 00:05:30.618 LIB libspdk_event_fsdev.a 00:05:30.618 LIB libspdk_event_keyring.a 00:05:30.618 LIB libspdk_event_vhost_blk.a 00:05:30.618 SO libspdk_event_fsdev.so.1.0 00:05:30.618 LIB libspdk_event_vmd.a 00:05:30.618 SO libspdk_event_keyring.so.1.0 00:05:30.618 LIB libspdk_event_iobuf.a 00:05:30.618 LIB libspdk_event_scheduler.a 00:05:30.618 LIB libspdk_event_sock.a 00:05:30.618 SO libspdk_event_vhost_blk.so.3.0 00:05:30.618 SO libspdk_event_sock.so.5.0 00:05:30.618 SO libspdk_event_scheduler.so.4.0 00:05:30.618 SO libspdk_event_vmd.so.6.0 00:05:30.618 SO libspdk_event_iobuf.so.3.0 00:05:30.618 SYMLINK libspdk_event_keyring.so 00:05:30.618 SYMLINK libspdk_event_fsdev.so 00:05:30.618 SYMLINK libspdk_event_vhost_blk.so 00:05:30.618 SYMLINK libspdk_event_sock.so 00:05:30.618 SYMLINK libspdk_event_scheduler.so 00:05:30.618 SYMLINK libspdk_event_vmd.so 00:05:30.618 SYMLINK libspdk_event_iobuf.so 00:05:30.878 CC module/event/subsystems/accel/accel.o 00:05:31.136 LIB libspdk_event_accel.a 00:05:31.136 SO libspdk_event_accel.so.6.0 00:05:31.136 SYMLINK libspdk_event_accel.so 00:05:31.395 CC module/event/subsystems/bdev/bdev.o 00:05:31.654 LIB libspdk_event_bdev.a 00:05:31.654 SO libspdk_event_bdev.so.6.0 00:05:31.913 SYMLINK libspdk_event_bdev.so 00:05:31.913 CC module/event/subsystems/scsi/scsi.o 00:05:31.913 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:31.913 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:31.913 CC module/event/subsystems/ublk/ublk.o 00:05:31.913 CC module/event/subsystems/nbd/nbd.o 00:05:32.173 LIB libspdk_event_ublk.a 00:05:32.173 LIB libspdk_event_nbd.a 00:05:32.173 LIB libspdk_event_scsi.a 00:05:32.173 SO libspdk_event_ublk.so.3.0 00:05:32.173 SO libspdk_event_nbd.so.6.0 00:05:32.173 SO libspdk_event_scsi.so.6.0 00:05:32.173 SYMLINK libspdk_event_ublk.so 00:05:32.173 SYMLINK libspdk_event_nbd.so 00:05:32.432 SYMLINK libspdk_event_scsi.so 00:05:32.432 LIB libspdk_event_nvmf.a 00:05:32.432 SO libspdk_event_nvmf.so.6.0 00:05:32.432 SYMLINK libspdk_event_nvmf.so 00:05:32.432 CC module/event/subsystems/iscsi/iscsi.o 00:05:32.432 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:32.705 LIB libspdk_event_vhost_scsi.a 00:05:32.705 LIB libspdk_event_iscsi.a 00:05:32.705 SO libspdk_event_vhost_scsi.so.3.0 00:05:32.705 SO libspdk_event_iscsi.so.6.0 00:05:32.974 SYMLINK libspdk_event_vhost_scsi.so 00:05:32.974 SYMLINK libspdk_event_iscsi.so 00:05:32.974 SO libspdk.so.6.0 00:05:32.974 SYMLINK libspdk.so 00:05:33.232 CC app/trace_record/trace_record.o 00:05:33.232 CC app/spdk_lspci/spdk_lspci.o 00:05:33.232 CXX app/trace/trace.o 00:05:33.232 CC 
app/spdk_nvme_perf/perf.o 00:05:33.232 CC app/spdk_nvme_identify/identify.o 00:05:33.232 CC app/iscsi_tgt/iscsi_tgt.o 00:05:33.232 CC app/nvmf_tgt/nvmf_main.o 00:05:33.491 CC app/spdk_tgt/spdk_tgt.o 00:05:33.491 CC test/thread/poller_perf/poller_perf.o 00:05:33.491 CC examples/util/zipf/zipf.o 00:05:33.491 LINK spdk_lspci 00:05:33.491 LINK poller_perf 00:05:33.491 LINK zipf 00:05:33.491 LINK nvmf_tgt 00:05:33.491 LINK iscsi_tgt 00:05:33.749 LINK spdk_trace_record 00:05:33.749 LINK spdk_tgt 00:05:33.749 LINK spdk_trace 00:05:34.008 CC test/dma/test_dma/test_dma.o 00:05:34.008 CC app/spdk_nvme_discover/discovery_aer.o 00:05:34.008 CC app/spdk_top/spdk_top.o 00:05:34.008 CC examples/ioat/perf/perf.o 00:05:34.008 CC test/app/bdev_svc/bdev_svc.o 00:05:34.008 CC examples/idxd/perf/perf.o 00:05:34.008 CC examples/vmd/lsvmd/lsvmd.o 00:05:34.008 CC examples/vmd/led/led.o 00:05:34.266 LINK spdk_nvme_discover 00:05:34.266 LINK lsvmd 00:05:34.266 LINK spdk_nvme_identify 00:05:34.266 LINK bdev_svc 00:05:34.266 LINK led 00:05:34.266 LINK ioat_perf 00:05:34.266 LINK spdk_nvme_perf 00:05:34.266 LINK idxd_perf 00:05:34.525 TEST_HEADER include/spdk/accel.h 00:05:34.525 TEST_HEADER include/spdk/accel_module.h 00:05:34.525 TEST_HEADER include/spdk/assert.h 00:05:34.525 TEST_HEADER include/spdk/barrier.h 00:05:34.525 TEST_HEADER include/spdk/base64.h 00:05:34.525 TEST_HEADER include/spdk/bdev.h 00:05:34.525 TEST_HEADER include/spdk/bdev_module.h 00:05:34.525 TEST_HEADER include/spdk/bdev_zone.h 00:05:34.525 TEST_HEADER include/spdk/bit_array.h 00:05:34.525 TEST_HEADER include/spdk/bit_pool.h 00:05:34.525 TEST_HEADER include/spdk/blob_bdev.h 00:05:34.525 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:34.525 TEST_HEADER include/spdk/blobfs.h 00:05:34.525 TEST_HEADER include/spdk/blob.h 00:05:34.525 TEST_HEADER include/spdk/conf.h 00:05:34.525 TEST_HEADER include/spdk/config.h 00:05:34.525 TEST_HEADER include/spdk/cpuset.h 00:05:34.525 LINK test_dma 00:05:34.525 TEST_HEADER include/spdk/crc16.h 00:05:34.525 TEST_HEADER include/spdk/crc32.h 00:05:34.525 TEST_HEADER include/spdk/crc64.h 00:05:34.525 TEST_HEADER include/spdk/dif.h 00:05:34.525 TEST_HEADER include/spdk/dma.h 00:05:34.525 TEST_HEADER include/spdk/endian.h 00:05:34.525 TEST_HEADER include/spdk/env_dpdk.h 00:05:34.525 TEST_HEADER include/spdk/env.h 00:05:34.525 TEST_HEADER include/spdk/event.h 00:05:34.525 TEST_HEADER include/spdk/fd_group.h 00:05:34.525 CC examples/ioat/verify/verify.o 00:05:34.525 TEST_HEADER include/spdk/fd.h 00:05:34.525 TEST_HEADER include/spdk/file.h 00:05:34.525 TEST_HEADER include/spdk/fsdev.h 00:05:34.525 TEST_HEADER include/spdk/fsdev_module.h 00:05:34.525 TEST_HEADER include/spdk/ftl.h 00:05:34.525 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:34.525 TEST_HEADER include/spdk/gpt_spec.h 00:05:34.525 TEST_HEADER include/spdk/hexlify.h 00:05:34.525 CC test/rpc_client/rpc_client_test.o 00:05:34.525 TEST_HEADER include/spdk/histogram_data.h 00:05:34.525 TEST_HEADER include/spdk/idxd.h 00:05:34.525 TEST_HEADER include/spdk/idxd_spec.h 00:05:34.525 TEST_HEADER include/spdk/init.h 00:05:34.525 TEST_HEADER include/spdk/ioat.h 00:05:34.525 TEST_HEADER include/spdk/ioat_spec.h 00:05:34.525 TEST_HEADER include/spdk/iscsi_spec.h 00:05:34.525 TEST_HEADER include/spdk/json.h 00:05:34.525 TEST_HEADER include/spdk/jsonrpc.h 00:05:34.525 TEST_HEADER include/spdk/keyring.h 00:05:34.525 TEST_HEADER include/spdk/keyring_module.h 00:05:34.525 TEST_HEADER include/spdk/likely.h 00:05:34.525 TEST_HEADER include/spdk/log.h 00:05:34.525 TEST_HEADER 
include/spdk/lvol.h 00:05:34.525 TEST_HEADER include/spdk/md5.h 00:05:34.525 CC test/event/event_perf/event_perf.o 00:05:34.525 TEST_HEADER include/spdk/memory.h 00:05:34.525 TEST_HEADER include/spdk/mmio.h 00:05:34.525 TEST_HEADER include/spdk/nbd.h 00:05:34.525 TEST_HEADER include/spdk/net.h 00:05:34.525 TEST_HEADER include/spdk/notify.h 00:05:34.525 TEST_HEADER include/spdk/nvme.h 00:05:34.525 TEST_HEADER include/spdk/nvme_intel.h 00:05:34.525 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:34.525 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:34.525 TEST_HEADER include/spdk/nvme_spec.h 00:05:34.525 TEST_HEADER include/spdk/nvme_zns.h 00:05:34.525 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:34.525 CC app/spdk_dd/spdk_dd.o 00:05:34.525 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:34.525 TEST_HEADER include/spdk/nvmf.h 00:05:34.525 TEST_HEADER include/spdk/nvmf_spec.h 00:05:34.525 TEST_HEADER include/spdk/nvmf_transport.h 00:05:34.525 TEST_HEADER include/spdk/opal.h 00:05:34.525 TEST_HEADER include/spdk/opal_spec.h 00:05:34.525 TEST_HEADER include/spdk/pci_ids.h 00:05:34.525 TEST_HEADER include/spdk/pipe.h 00:05:34.525 TEST_HEADER include/spdk/queue.h 00:05:34.525 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:34.525 TEST_HEADER include/spdk/reduce.h 00:05:34.525 TEST_HEADER include/spdk/rpc.h 00:05:34.525 TEST_HEADER include/spdk/scheduler.h 00:05:34.525 TEST_HEADER include/spdk/scsi.h 00:05:34.525 TEST_HEADER include/spdk/scsi_spec.h 00:05:34.525 TEST_HEADER include/spdk/sock.h 00:05:34.525 TEST_HEADER include/spdk/stdinc.h 00:05:34.525 TEST_HEADER include/spdk/string.h 00:05:34.525 TEST_HEADER include/spdk/thread.h 00:05:34.525 TEST_HEADER include/spdk/trace.h 00:05:34.525 TEST_HEADER include/spdk/trace_parser.h 00:05:34.525 CC test/env/mem_callbacks/mem_callbacks.o 00:05:34.525 TEST_HEADER include/spdk/tree.h 00:05:34.525 TEST_HEADER include/spdk/ublk.h 00:05:34.525 TEST_HEADER include/spdk/util.h 00:05:34.525 TEST_HEADER include/spdk/uuid.h 00:05:34.525 TEST_HEADER include/spdk/version.h 00:05:34.525 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:34.525 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:34.525 TEST_HEADER include/spdk/vhost.h 00:05:34.525 TEST_HEADER include/spdk/vmd.h 00:05:34.525 TEST_HEADER include/spdk/xor.h 00:05:34.525 TEST_HEADER include/spdk/zipf.h 00:05:34.525 CXX test/cpp_headers/accel.o 00:05:34.525 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:34.784 CXX test/cpp_headers/accel_module.o 00:05:34.784 LINK event_perf 00:05:34.784 LINK rpc_client_test 00:05:34.784 LINK verify 00:05:34.784 LINK mem_callbacks 00:05:34.784 CXX test/cpp_headers/assert.o 00:05:34.784 LINK interrupt_tgt 00:05:34.784 LINK spdk_top 00:05:34.784 CC test/event/reactor/reactor.o 00:05:35.043 CC test/app/histogram_perf/histogram_perf.o 00:05:35.043 CC test/app/jsoncat/jsoncat.o 00:05:35.043 CC test/env/vtophys/vtophys.o 00:05:35.043 LINK nvme_fuzz 00:05:35.043 CXX test/cpp_headers/barrier.o 00:05:35.043 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:35.043 LINK spdk_dd 00:05:35.043 LINK reactor 00:05:35.043 LINK histogram_perf 00:05:35.043 LINK jsoncat 00:05:35.043 LINK vtophys 00:05:35.301 CXX test/cpp_headers/base64.o 00:05:35.301 LINK env_dpdk_post_init 00:05:35.301 CC examples/sock/hello_world/hello_sock.o 00:05:35.301 CC examples/thread/thread/thread_ex.o 00:05:35.301 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:35.301 CC test/event/reactor_perf/reactor_perf.o 00:05:35.301 CC test/app/stub/stub.o 00:05:35.301 CXX test/cpp_headers/bdev.o 00:05:35.301 CC 
test/env/memory/memory_ut.o 00:05:35.301 CC app/fio/nvme/fio_plugin.o 00:05:35.301 CC app/vhost/vhost.o 00:05:35.560 LINK hello_sock 00:05:35.560 CC app/fio/bdev/fio_plugin.o 00:05:35.560 LINK reactor_perf 00:05:35.560 LINK thread 00:05:35.560 CXX test/cpp_headers/bdev_module.o 00:05:35.560 LINK stub 00:05:35.560 LINK vhost 00:05:35.560 CC test/env/pci/pci_ut.o 00:05:35.818 CC test/event/app_repeat/app_repeat.o 00:05:35.818 CXX test/cpp_headers/bdev_zone.o 00:05:35.818 CC examples/nvme/hello_world/hello_world.o 00:05:35.818 LINK app_repeat 00:05:35.818 CC examples/nvme/reconnect/reconnect.o 00:05:35.818 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:36.076 LINK spdk_nvme 00:05:36.076 CXX test/cpp_headers/bit_array.o 00:05:36.076 LINK spdk_bdev 00:05:36.076 CXX test/cpp_headers/bit_pool.o 00:05:36.076 CXX test/cpp_headers/blob_bdev.o 00:05:36.076 LINK hello_world 00:05:36.076 LINK pci_ut 00:05:36.076 LINK memory_ut 00:05:36.076 CC test/event/scheduler/scheduler.o 00:05:36.334 LINK hello_fsdev 00:05:36.334 LINK reconnect 00:05:36.334 CXX test/cpp_headers/blobfs_bdev.o 00:05:36.334 CXX test/cpp_headers/blobfs.o 00:05:36.334 CC examples/accel/perf/accel_perf.o 00:05:36.334 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:36.334 LINK scheduler 00:05:36.593 CXX test/cpp_headers/blob.o 00:05:36.593 CC test/accel/dif/dif.o 00:05:36.593 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:36.593 CC examples/nvme/arbitration/arbitration.o 00:05:36.593 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:36.593 CC test/blobfs/mkfs/mkfs.o 00:05:36.593 CXX test/cpp_headers/conf.o 00:05:36.851 CC test/lvol/esnap/esnap.o 00:05:36.851 CC test/nvme/aer/aer.o 00:05:36.851 LINK mkfs 00:05:36.851 CXX test/cpp_headers/config.o 00:05:36.851 LINK arbitration 00:05:36.851 LINK accel_perf 00:05:36.851 CXX test/cpp_headers/cpuset.o 00:05:36.851 LINK iscsi_fuzz 00:05:36.851 LINK vhost_fuzz 00:05:37.109 CXX test/cpp_headers/crc16.o 00:05:37.109 LINK nvme_manage 00:05:37.109 CXX test/cpp_headers/crc32.o 00:05:37.109 LINK aer 00:05:37.109 LINK dif 00:05:37.109 CXX test/cpp_headers/crc64.o 00:05:37.109 CC examples/nvme/hotplug/hotplug.o 00:05:37.367 CC test/nvme/reset/reset.o 00:05:37.367 CC examples/blob/cli/blobcli.o 00:05:37.367 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:37.367 CC examples/bdev/hello_world/hello_bdev.o 00:05:37.367 CC examples/blob/hello_world/hello_blob.o 00:05:37.367 CC examples/bdev/bdevperf/bdevperf.o 00:05:37.367 CXX test/cpp_headers/dif.o 00:05:37.367 CC examples/nvme/abort/abort.o 00:05:37.367 LINK hotplug 00:05:37.625 LINK cmb_copy 00:05:37.625 LINK hello_blob 00:05:37.625 LINK reset 00:05:37.625 LINK hello_bdev 00:05:37.625 CXX test/cpp_headers/dma.o 00:05:37.625 CXX test/cpp_headers/endian.o 00:05:37.625 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:37.883 CXX test/cpp_headers/env_dpdk.o 00:05:37.884 CXX test/cpp_headers/env.o 00:05:37.884 LINK blobcli 00:05:37.884 CC test/nvme/sgl/sgl.o 00:05:37.884 CXX test/cpp_headers/event.o 00:05:37.884 LINK abort 00:05:37.884 LINK pmr_persistence 00:05:37.884 CXX test/cpp_headers/fd_group.o 00:05:37.884 CC test/bdev/bdevio/bdevio.o 00:05:38.142 CC test/nvme/e2edp/nvme_dp.o 00:05:38.142 CC test/nvme/overhead/overhead.o 00:05:38.142 CC test/nvme/err_injection/err_injection.o 00:05:38.142 CC test/nvme/startup/startup.o 00:05:38.142 LINK sgl 00:05:38.142 CXX test/cpp_headers/fd.o 00:05:38.142 CC test/nvme/reserve/reserve.o 00:05:38.142 LINK bdevperf 00:05:38.400 CXX test/cpp_headers/file.o 00:05:38.400 LINK startup 00:05:38.400 LINK err_injection 
00:05:38.400 LINK nvme_dp 00:05:38.400 LINK bdevio 00:05:38.400 CC test/nvme/simple_copy/simple_copy.o 00:05:38.400 LINK overhead 00:05:38.400 LINK reserve 00:05:38.400 CXX test/cpp_headers/fsdev.o 00:05:38.658 CC test/nvme/boot_partition/boot_partition.o 00:05:38.658 CC test/nvme/compliance/nvme_compliance.o 00:05:38.658 CC test/nvme/connect_stress/connect_stress.o 00:05:38.658 LINK simple_copy 00:05:38.658 CC test/nvme/fused_ordering/fused_ordering.o 00:05:38.658 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:38.658 CXX test/cpp_headers/fsdev_module.o 00:05:38.658 CC test/nvme/fdp/fdp.o 00:05:38.658 CC examples/nvmf/nvmf/nvmf.o 00:05:38.658 LINK boot_partition 00:05:38.658 LINK connect_stress 00:05:38.916 CC test/nvme/cuse/cuse.o 00:05:38.916 CXX test/cpp_headers/ftl.o 00:05:38.916 LINK doorbell_aers 00:05:38.916 LINK fused_ordering 00:05:38.916 LINK nvme_compliance 00:05:38.916 CXX test/cpp_headers/fuse_dispatcher.o 00:05:38.916 CXX test/cpp_headers/gpt_spec.o 00:05:38.916 LINK nvmf 00:05:38.916 LINK fdp 00:05:38.916 CXX test/cpp_headers/hexlify.o 00:05:38.916 CXX test/cpp_headers/histogram_data.o 00:05:39.174 CXX test/cpp_headers/idxd.o 00:05:39.174 CXX test/cpp_headers/idxd_spec.o 00:05:39.174 CXX test/cpp_headers/init.o 00:05:39.174 CXX test/cpp_headers/ioat.o 00:05:39.174 CXX test/cpp_headers/ioat_spec.o 00:05:39.174 CXX test/cpp_headers/iscsi_spec.o 00:05:39.174 CXX test/cpp_headers/json.o 00:05:39.174 CXX test/cpp_headers/jsonrpc.o 00:05:39.174 CXX test/cpp_headers/keyring.o 00:05:39.174 CXX test/cpp_headers/keyring_module.o 00:05:39.174 CXX test/cpp_headers/likely.o 00:05:39.174 CXX test/cpp_headers/log.o 00:05:39.431 CXX test/cpp_headers/lvol.o 00:05:39.431 CXX test/cpp_headers/md5.o 00:05:39.431 CXX test/cpp_headers/memory.o 00:05:39.431 CXX test/cpp_headers/mmio.o 00:05:39.431 CXX test/cpp_headers/nbd.o 00:05:39.431 CXX test/cpp_headers/net.o 00:05:39.431 CXX test/cpp_headers/notify.o 00:05:39.431 CXX test/cpp_headers/nvme.o 00:05:39.431 CXX test/cpp_headers/nvme_intel.o 00:05:39.431 CXX test/cpp_headers/nvme_ocssd.o 00:05:39.431 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:39.431 CXX test/cpp_headers/nvme_spec.o 00:05:39.689 CXX test/cpp_headers/nvme_zns.o 00:05:39.689 CXX test/cpp_headers/nvmf_cmd.o 00:05:39.689 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:39.689 CXX test/cpp_headers/nvmf.o 00:05:39.689 CXX test/cpp_headers/nvmf_spec.o 00:05:39.689 CXX test/cpp_headers/nvmf_transport.o 00:05:39.689 CXX test/cpp_headers/opal.o 00:05:39.689 CXX test/cpp_headers/opal_spec.o 00:05:39.689 CXX test/cpp_headers/pci_ids.o 00:05:39.689 CXX test/cpp_headers/pipe.o 00:05:39.689 CXX test/cpp_headers/queue.o 00:05:39.689 CXX test/cpp_headers/reduce.o 00:05:39.948 CXX test/cpp_headers/rpc.o 00:05:39.948 CXX test/cpp_headers/scheduler.o 00:05:39.948 CXX test/cpp_headers/scsi.o 00:05:39.948 CXX test/cpp_headers/scsi_spec.o 00:05:39.948 CXX test/cpp_headers/sock.o 00:05:39.948 CXX test/cpp_headers/stdinc.o 00:05:39.948 CXX test/cpp_headers/string.o 00:05:39.948 CXX test/cpp_headers/thread.o 00:05:39.948 CXX test/cpp_headers/trace.o 00:05:39.948 CXX test/cpp_headers/trace_parser.o 00:05:40.207 CXX test/cpp_headers/tree.o 00:05:40.207 CXX test/cpp_headers/ublk.o 00:05:40.207 CXX test/cpp_headers/util.o 00:05:40.207 CXX test/cpp_headers/uuid.o 00:05:40.207 CXX test/cpp_headers/version.o 00:05:40.207 CXX test/cpp_headers/vfio_user_pci.o 00:05:40.207 LINK cuse 00:05:40.207 CXX test/cpp_headers/vfio_user_spec.o 00:05:40.207 CXX test/cpp_headers/vhost.o 00:05:40.207 CXX test/cpp_headers/vmd.o 
00:05:40.207 CXX test/cpp_headers/xor.o 00:05:40.207 CXX test/cpp_headers/zipf.o 00:05:41.583 LINK esnap 00:05:42.150 00:05:42.150 real 1m27.115s 00:05:42.150 user 7m1.269s 00:05:42.150 sys 1m7.540s 00:05:42.150 05:58:07 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:05:42.150 05:58:07 make -- common/autotest_common.sh@10 -- $ set +x 00:05:42.150 ************************************ 00:05:42.150 END TEST make 00:05:42.150 ************************************ 00:05:42.150 05:58:07 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:42.150 05:58:07 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:42.150 05:58:07 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:42.150 05:58:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.150 05:58:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:05:42.150 05:58:07 -- pm/common@44 -- $ pid=6030 00:05:42.150 05:58:07 -- pm/common@50 -- $ kill -TERM 6030 00:05:42.150 05:58:07 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.150 05:58:07 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:05:42.150 05:58:07 -- pm/common@44 -- $ pid=6032 00:05:42.151 05:58:07 -- pm/common@50 -- $ kill -TERM 6032 00:05:42.151 05:58:07 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:42.151 05:58:07 -- common/autotest_common.sh@1681 -- # lcov --version 00:05:42.151 05:58:07 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:42.151 05:58:07 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:42.151 05:58:07 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.151 05:58:07 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.151 05:58:07 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.151 05:58:07 -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.151 05:58:07 -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.151 05:58:07 -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.151 05:58:07 -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.151 05:58:07 -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.151 05:58:07 -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.151 05:58:07 -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.151 05:58:07 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.151 05:58:07 -- scripts/common.sh@344 -- # case "$op" in 00:05:42.151 05:58:07 -- scripts/common.sh@345 -- # : 1 00:05:42.151 05:58:07 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.151 05:58:07 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.151 05:58:07 -- scripts/common.sh@365 -- # decimal 1 00:05:42.151 05:58:07 -- scripts/common.sh@353 -- # local d=1 00:05:42.151 05:58:07 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.151 05:58:07 -- scripts/common.sh@355 -- # echo 1 00:05:42.151 05:58:07 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.151 05:58:07 -- scripts/common.sh@366 -- # decimal 2 00:05:42.151 05:58:07 -- scripts/common.sh@353 -- # local d=2 00:05:42.151 05:58:07 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.151 05:58:07 -- scripts/common.sh@355 -- # echo 2 00:05:42.151 05:58:07 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.151 05:58:07 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.151 05:58:07 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.151 05:58:07 -- scripts/common.sh@368 -- # return 0 00:05:42.151 05:58:07 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.151 05:58:07 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.151 --rc genhtml_branch_coverage=1 00:05:42.151 --rc genhtml_function_coverage=1 00:05:42.151 --rc genhtml_legend=1 00:05:42.151 --rc geninfo_all_blocks=1 00:05:42.151 --rc geninfo_unexecuted_blocks=1 00:05:42.151 00:05:42.151 ' 00:05:42.151 05:58:07 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.151 --rc genhtml_branch_coverage=1 00:05:42.151 --rc genhtml_function_coverage=1 00:05:42.151 --rc genhtml_legend=1 00:05:42.151 --rc geninfo_all_blocks=1 00:05:42.151 --rc geninfo_unexecuted_blocks=1 00:05:42.151 00:05:42.151 ' 00:05:42.151 05:58:07 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.151 --rc genhtml_branch_coverage=1 00:05:42.151 --rc genhtml_function_coverage=1 00:05:42.151 --rc genhtml_legend=1 00:05:42.151 --rc geninfo_all_blocks=1 00:05:42.151 --rc geninfo_unexecuted_blocks=1 00:05:42.151 00:05:42.151 ' 00:05:42.151 05:58:07 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.151 --rc genhtml_branch_coverage=1 00:05:42.151 --rc genhtml_function_coverage=1 00:05:42.151 --rc genhtml_legend=1 00:05:42.151 --rc geninfo_all_blocks=1 00:05:42.151 --rc geninfo_unexecuted_blocks=1 00:05:42.151 00:05:42.151 ' 00:05:42.151 05:58:07 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:42.151 05:58:07 -- nvmf/common.sh@7 -- # uname -s 00:05:42.151 05:58:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:42.151 05:58:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:42.151 05:58:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:42.151 05:58:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:42.151 05:58:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:42.151 05:58:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:42.151 05:58:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:42.151 05:58:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:42.151 05:58:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:42.151 05:58:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:42.151 05:58:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:05:42.151 
05:58:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:05:42.151 05:58:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:42.151 05:58:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:42.151 05:58:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:05:42.151 05:58:07 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:42.151 05:58:07 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:42.151 05:58:07 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:42.151 05:58:07 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:42.151 05:58:07 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:42.151 05:58:07 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:42.151 05:58:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.151 05:58:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.151 05:58:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.151 05:58:07 -- paths/export.sh@5 -- # export PATH 00:05:42.151 05:58:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:42.151 05:58:07 -- nvmf/common.sh@51 -- # : 0 00:05:42.151 05:58:07 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:42.151 05:58:07 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:42.151 05:58:07 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:42.151 05:58:07 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:42.151 05:58:07 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:42.151 05:58:07 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:42.151 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:42.151 05:58:07 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:42.151 05:58:07 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:42.151 05:58:07 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:42.151 05:58:07 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:42.151 05:58:07 -- spdk/autotest.sh@32 -- # uname -s 00:05:42.151 05:58:07 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:42.151 05:58:07 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:42.151 05:58:07 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:42.151 05:58:07 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:05:42.151 05:58:07 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:05:42.151 05:58:07 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:42.410 05:58:07 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:42.410 05:58:07 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:42.410 05:58:07 -- spdk/autotest.sh@48 -- # udevadm_pid=66607 00:05:42.410 05:58:07 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:42.410 05:58:07 -- pm/common@17 -- # local monitor 00:05:42.410 05:58:07 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:42.410 05:58:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.410 05:58:07 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:42.410 05:58:07 -- pm/common@25 -- # sleep 1 00:05:42.410 05:58:07 -- pm/common@21 -- # date +%s 00:05:42.410 05:58:07 -- pm/common@21 -- # date +%s 00:05:42.410 05:58:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762287 00:05:42.410 05:58:07 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727762287 00:05:42.410 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762287_collect-vmstat.pm.log 00:05:42.410 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727762287_collect-cpu-load.pm.log 00:05:43.345 05:58:08 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:43.345 05:58:08 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:43.345 05:58:08 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.345 05:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:43.345 05:58:08 -- spdk/autotest.sh@59 -- # create_test_list 00:05:43.345 05:58:08 -- common/autotest_common.sh@748 -- # xtrace_disable 00:05:43.345 05:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:43.345 05:58:08 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:05:43.345 05:58:08 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:05:43.345 05:58:08 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:05:43.345 05:58:08 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:05:43.345 05:58:08 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:05:43.345 05:58:08 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:43.345 05:58:08 -- common/autotest_common.sh@1455 -- # uname 00:05:43.345 05:58:08 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:05:43.345 05:58:08 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:43.345 05:58:08 -- common/autotest_common.sh@1475 -- # uname 00:05:43.345 05:58:08 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:05:43.345 05:58:08 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:43.345 05:58:08 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:43.602 lcov: LCOV version 1.15 00:05:43.602 05:58:08 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:58.484 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:58.484 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:06:13.398 05:58:37 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:13.398 05:58:37 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:13.398 05:58:37 -- common/autotest_common.sh@10 -- # set +x 00:06:13.398 05:58:37 -- spdk/autotest.sh@78 -- # rm -f 00:06:13.398 05:58:37 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:13.398 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:13.398 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:06:13.398 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:06:13.398 05:58:38 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:13.398 05:58:38 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:06:13.398 05:58:38 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:06:13.398 05:58:38 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:06:13.398 05:58:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.398 05:58:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:06:13.398 05:58:38 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:06:13.398 05:58:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.398 05:58:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:06:13.398 05:58:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:06:13.398 05:58:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.398 05:58:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:06:13.398 05:58:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:06:13.398 05:58:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:06:13.398 05:58:38 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:06:13.398 05:58:38 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:06:13.398 05:58:38 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:06:13.398 05:58:38 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:06:13.398 05:58:38 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:13.398 05:58:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.398 05:58:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.398 05:58:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:13.398 05:58:38 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:13.398 05:58:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:13.398 No valid GPT data, bailing 
00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # pt= 00:06:13.398 05:58:38 -- scripts/common.sh@395 -- # return 1 00:06:13.398 05:58:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:13.398 1+0 records in 00:06:13.398 1+0 records out 00:06:13.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00380193 s, 276 MB/s 00:06:13.398 05:58:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.398 05:58:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.398 05:58:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:06:13.398 05:58:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:06:13.398 05:58:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:06:13.398 No valid GPT data, bailing 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # pt= 00:06:13.398 05:58:38 -- scripts/common.sh@395 -- # return 1 00:06:13.398 05:58:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:06:13.398 1+0 records in 00:06:13.398 1+0 records out 00:06:13.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502753 s, 209 MB/s 00:06:13.398 05:58:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.398 05:58:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.398 05:58:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:06:13.398 05:58:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:06:13.398 05:58:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:06:13.398 No valid GPT data, bailing 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # pt= 00:06:13.398 05:58:38 -- scripts/common.sh@395 -- # return 1 00:06:13.398 05:58:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:06:13.398 1+0 records in 00:06:13.398 1+0 records out 00:06:13.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00436516 s, 240 MB/s 00:06:13.398 05:58:38 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:13.398 05:58:38 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:13.398 05:58:38 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:06:13.398 05:58:38 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:06:13.398 05:58:38 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:06:13.398 No valid GPT data, bailing 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:06:13.398 05:58:38 -- scripts/common.sh@394 -- # pt= 00:06:13.398 05:58:38 -- scripts/common.sh@395 -- # return 1 00:06:13.398 05:58:38 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:06:13.398 1+0 records in 00:06:13.398 1+0 records out 00:06:13.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00477567 s, 220 MB/s 00:06:13.398 05:58:38 -- spdk/autotest.sh@105 -- # sync 00:06:13.398 05:58:38 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:13.398 05:58:38 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:13.398 05:58:38 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:15.301 05:58:40 -- spdk/autotest.sh@111 -- # uname -s 00:06:15.301 05:58:40 -- spdk/autotest.sh@111 -- # [[ Linux 
== Linux ]] 00:06:15.301 05:58:40 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:15.301 05:58:40 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:06:15.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:15.869 Hugepages 00:06:15.869 node hugesize free / total 00:06:15.869 node0 1048576kB 0 / 0 00:06:15.869 node0 2048kB 0 / 0 00:06:15.869 00:06:15.869 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:16.129 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:06:16.129 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:06:16.129 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:06:16.129 05:58:41 -- spdk/autotest.sh@117 -- # uname -s 00:06:16.129 05:58:41 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:16.129 05:58:41 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:06:16.129 05:58:41 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:16.706 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:16.965 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.965 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:16.965 05:58:42 -- common/autotest_common.sh@1515 -- # sleep 1 00:06:17.901 05:58:43 -- common/autotest_common.sh@1516 -- # bdfs=() 00:06:17.902 05:58:43 -- common/autotest_common.sh@1516 -- # local bdfs 00:06:17.902 05:58:43 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:06:17.902 05:58:43 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:06:17.902 05:58:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:17.902 05:58:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:17.902 05:58:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:18.159 05:58:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:18.159 05:58:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:18.159 05:58:43 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:18.159 05:58:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:18.159 05:58:43 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:18.418 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:18.418 Waiting for block devices as requested 00:06:18.418 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.676 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:18.676 05:58:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:18.676 05:58:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:06:18.676 05:58:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:18.676 05:58:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:06:18.677 05:58:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1490 -- # 
printf '%s\n' nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:18.677 05:58:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:18.677 05:58:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:18.677 05:58:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1541 -- # continue 00:06:18.677 05:58:44 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:06:18.677 05:58:44 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:06:18.677 05:58:44 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:06:18.677 05:58:44 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # grep oacs 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:06:18.677 05:58:44 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:06:18.677 05:58:44 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:06:18.677 05:58:44 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:06:18.677 05:58:44 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:06:18.677 05:58:44 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:06:18.677 05:58:44 -- common/autotest_common.sh@1541 -- # continue 00:06:18.677 05:58:44 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:18.677 05:58:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:18.677 05:58:44 -- common/autotest_common.sh@10 -- # set +x 00:06:18.677 05:58:44 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:18.677 05:58:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:18.677 05:58:44 -- common/autotest_common.sh@10 -- # set +x 00:06:18.677 05:58:44 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:19.612 0000:00:03.0 (1af4 
1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:19.612 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.612 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:19.612 05:58:45 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:19.612 05:58:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:19.612 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.612 05:58:45 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:19.612 05:58:45 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:06:19.612 05:58:45 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:06:19.612 05:58:45 -- common/autotest_common.sh@1561 -- # bdfs=() 00:06:19.612 05:58:45 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:06:19.612 05:58:45 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:06:19.612 05:58:45 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:06:19.613 05:58:45 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:06:19.613 05:58:45 -- common/autotest_common.sh@1496 -- # bdfs=() 00:06:19.613 05:58:45 -- common/autotest_common.sh@1496 -- # local bdfs 00:06:19.613 05:58:45 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:19.613 05:58:45 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:19.613 05:58:45 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:06:19.613 05:58:45 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:06:19.613 05:58:45 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:06:19.613 05:58:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:19.871 05:58:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:06:19.871 05:58:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:19.871 05:58:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:19.871 05:58:45 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:06:19.871 05:58:45 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:06:19.871 05:58:45 -- common/autotest_common.sh@1564 -- # device=0x0010 00:06:19.871 05:58:45 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:06:19.871 05:58:45 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:06:19.871 05:58:45 -- common/autotest_common.sh@1570 -- # return 0 00:06:19.871 05:58:45 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:06:19.871 05:58:45 -- common/autotest_common.sh@1578 -- # return 0 00:06:19.871 05:58:45 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:19.871 05:58:45 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:19.871 05:58:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.871 05:58:45 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:19.871 05:58:45 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:19.871 05:58:45 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:19.871 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.871 05:58:45 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:06:19.871 05:58:45 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:06:19.871 05:58:45 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:06:19.871 05:58:45 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:19.871 05:58:45 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.871 05:58:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.871 05:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:19.871 ************************************ 00:06:19.871 START TEST env 00:06:19.871 ************************************ 00:06:19.871 05:58:45 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:06:19.871 * Looking for test storage... 00:06:19.871 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:06:19.871 05:58:45 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.871 05:58:45 env -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.871 05:58:45 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.871 05:58:45 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.871 05:58:45 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.871 05:58:45 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.871 05:58:45 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.871 05:58:45 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.871 05:58:45 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.871 05:58:45 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.871 05:58:45 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.871 05:58:45 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.871 05:58:45 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.872 05:58:45 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.872 05:58:45 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.872 05:58:45 env -- scripts/common.sh@344 -- # case "$op" in 00:06:19.872 05:58:45 env -- scripts/common.sh@345 -- # : 1 00:06:19.872 05:58:45 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.872 05:58:45 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.872 05:58:45 env -- scripts/common.sh@365 -- # decimal 1 00:06:19.872 05:58:45 env -- scripts/common.sh@353 -- # local d=1 00:06:19.872 05:58:45 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.872 05:58:45 env -- scripts/common.sh@355 -- # echo 1 00:06:19.872 05:58:45 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.872 05:58:45 env -- scripts/common.sh@366 -- # decimal 2 00:06:19.872 05:58:45 env -- scripts/common.sh@353 -- # local d=2 00:06:19.872 05:58:45 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.872 05:58:45 env -- scripts/common.sh@355 -- # echo 2 00:06:19.872 05:58:45 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.872 05:58:45 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.872 05:58:45 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.872 05:58:45 env -- scripts/common.sh@368 -- # return 0 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.872 --rc genhtml_branch_coverage=1 00:06:19.872 --rc genhtml_function_coverage=1 00:06:19.872 --rc genhtml_legend=1 00:06:19.872 --rc geninfo_all_blocks=1 00:06:19.872 --rc geninfo_unexecuted_blocks=1 00:06:19.872 00:06:19.872 ' 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.872 --rc genhtml_branch_coverage=1 00:06:19.872 --rc genhtml_function_coverage=1 00:06:19.872 --rc genhtml_legend=1 00:06:19.872 --rc geninfo_all_blocks=1 00:06:19.872 --rc geninfo_unexecuted_blocks=1 00:06:19.872 00:06:19.872 ' 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.872 --rc genhtml_branch_coverage=1 00:06:19.872 --rc genhtml_function_coverage=1 00:06:19.872 --rc genhtml_legend=1 00:06:19.872 --rc geninfo_all_blocks=1 00:06:19.872 --rc geninfo_unexecuted_blocks=1 00:06:19.872 00:06:19.872 ' 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.872 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.872 --rc genhtml_branch_coverage=1 00:06:19.872 --rc genhtml_function_coverage=1 00:06:19.872 --rc genhtml_legend=1 00:06:19.872 --rc geninfo_all_blocks=1 00:06:19.872 --rc geninfo_unexecuted_blocks=1 00:06:19.872 00:06:19.872 ' 00:06:19.872 05:58:45 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.872 05:58:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.872 05:58:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:19.872 ************************************ 00:06:19.872 START TEST env_memory 00:06:19.872 ************************************ 00:06:19.872 05:58:45 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:06:19.872 00:06:19.872 00:06:19.872 CUnit - A unit testing framework for C - Version 2.1-3 00:06:19.872 http://cunit.sourceforge.net/ 00:06:19.872 00:06:19.872 00:06:19.872 Suite: memory 00:06:20.131 Test: alloc and free memory map ...[2024-10-01 05:58:45.514311] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:20.131 passed 00:06:20.131 Test: mem map translation ...[2024-10-01 05:58:45.547191] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:20.131 [2024-10-01 05:58:45.547241] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:20.131 [2024-10-01 05:58:45.547307] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:20.131 [2024-10-01 05:58:45.547318] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:20.131 passed 00:06:20.131 Test: mem map registration ...[2024-10-01 05:58:45.619847] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:20.131 [2024-10-01 05:58:45.619889] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:20.131 passed 00:06:20.131 Test: mem map adjacent registrations ...passed 00:06:20.131 00:06:20.131 Run Summary: Type Total Ran Passed Failed Inactive 00:06:20.131 suites 1 1 n/a 0 0 00:06:20.131 tests 4 4 4 0 0 00:06:20.131 asserts 152 152 152 0 n/a 00:06:20.131 00:06:20.131 Elapsed time = 0.226 seconds 00:06:20.131 00:06:20.131 real 0m0.244s 00:06:20.131 user 0m0.223s 00:06:20.131 sys 0m0.014s 00:06:20.131 05:58:45 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:20.131 05:58:45 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:20.131 ************************************ 00:06:20.131 END TEST env_memory 00:06:20.131 ************************************ 00:06:20.391 05:58:45 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:20.391 05:58:45 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:20.391 05:58:45 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:20.391 05:58:45 env -- common/autotest_common.sh@10 -- # set +x 00:06:20.391 ************************************ 00:06:20.391 START TEST env_vtophys 00:06:20.391 ************************************ 00:06:20.391 05:58:45 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:06:20.391 EAL: lib.eal log level changed from notice to debug 00:06:20.391 EAL: Detected lcore 0 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 1 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 2 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 3 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 4 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 5 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 6 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 7 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 8 as core 0 on socket 0 00:06:20.391 EAL: Detected lcore 9 as core 0 on socket 0 00:06:20.391 EAL: Maximum logical cores by configuration: 128 00:06:20.391 EAL: Detected CPU lcores: 10 00:06:20.391 EAL: Detected NUMA nodes: 1 00:06:20.391 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:20.391 EAL: Detected shared linkage of DPDK 00:06:20.391 EAL: 
open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:20.391 EAL: Registered [vdev] bus. 00:06:20.391 EAL: bus.vdev log level changed from disabled to notice 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:20.391 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:20.391 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:20.391 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:20.391 EAL: No shared files mode enabled, IPC will be disabled 00:06:20.391 EAL: No shared files mode enabled, IPC is disabled 00:06:20.391 EAL: Selected IOVA mode 'PA' 00:06:20.391 EAL: Probing VFIO support... 00:06:20.391 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:20.391 EAL: VFIO modules not loaded, skipping VFIO support... 00:06:20.391 EAL: Ask a virtual area of 0x2e000 bytes 00:06:20.391 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:20.391 EAL: Setting up physically contiguous memory... 00:06:20.391 EAL: Setting maximum number of open files to 524288 00:06:20.391 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:20.391 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:20.391 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.391 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:20.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.391 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.391 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:20.391 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:20.391 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.391 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:20.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.391 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.391 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:20.391 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:20.391 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.391 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:20.391 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.391 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.391 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:20.392 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:20.392 EAL: Ask a virtual area of 0x61000 bytes 00:06:20.392 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:20.392 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:20.392 EAL: Ask a virtual area of 0x400000000 bytes 00:06:20.392 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:20.392 EAL: VA reserved for memseg list at 0x200c00800000, size 
400000000 00:06:20.392 EAL: Hugepages will be freed exactly as allocated. 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: TSC frequency is ~2200000 KHz 00:06:20.392 EAL: Main lcore 0 is ready (tid=7f8469e5ca00;cpuset=[0]) 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 0 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 2MB 00:06:20.392 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:20.392 EAL: Mem event callback 'spdk:(nil)' registered 00:06:20.392 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:06:20.392 00:06:20.392 00:06:20.392 CUnit - A unit testing framework for C - Version 2.1-3 00:06:20.392 http://cunit.sourceforge.net/ 00:06:20.392 00:06:20.392 00:06:20.392 Suite: components_suite 00:06:20.392 Test: vtophys_malloc_test ...passed 00:06:20.392 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 4MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 4MB 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 6MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 6MB 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 10MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 10MB 00:06:20.392 EAL: Trying to obtain current memory policy. 
00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 18MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 18MB 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 34MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 34MB 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 66MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was shrunk by 66MB 00:06:20.392 EAL: Trying to obtain current memory policy. 00:06:20.392 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.392 EAL: Restoring previous memory policy: 4 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.392 EAL: request: mp_malloc_sync 00:06:20.392 EAL: No shared files mode enabled, IPC is disabled 00:06:20.392 EAL: Heap on socket 0 was expanded by 130MB 00:06:20.392 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.652 EAL: request: mp_malloc_sync 00:06:20.652 EAL: No shared files mode enabled, IPC is disabled 00:06:20.652 EAL: Heap on socket 0 was shrunk by 130MB 00:06:20.652 EAL: Trying to obtain current memory policy. 00:06:20.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.652 EAL: Restoring previous memory policy: 4 00:06:20.652 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.652 EAL: request: mp_malloc_sync 00:06:20.652 EAL: No shared files mode enabled, IPC is disabled 00:06:20.652 EAL: Heap on socket 0 was expanded by 258MB 00:06:20.652 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.652 EAL: request: mp_malloc_sync 00:06:20.652 EAL: No shared files mode enabled, IPC is disabled 00:06:20.652 EAL: Heap on socket 0 was shrunk by 258MB 00:06:20.652 EAL: Trying to obtain current memory policy. 
00:06:20.652 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.652 EAL: Restoring previous memory policy: 4 00:06:20.652 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.652 EAL: request: mp_malloc_sync 00:06:20.652 EAL: No shared files mode enabled, IPC is disabled 00:06:20.652 EAL: Heap on socket 0 was expanded by 514MB 00:06:20.652 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.912 EAL: request: mp_malloc_sync 00:06:20.912 EAL: No shared files mode enabled, IPC is disabled 00:06:20.912 EAL: Heap on socket 0 was shrunk by 514MB 00:06:20.912 EAL: Trying to obtain current memory policy. 00:06:20.912 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:20.912 EAL: Restoring previous memory policy: 4 00:06:20.912 EAL: Calling mem event callback 'spdk:(nil)' 00:06:20.912 EAL: request: mp_malloc_sync 00:06:20.912 EAL: No shared files mode enabled, IPC is disabled 00:06:20.912 EAL: Heap on socket 0 was expanded by 1026MB 00:06:21.171 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.171 EAL: request: mp_malloc_sync 00:06:21.171 passed 00:06:21.171 00:06:21.171 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.171 suites 1 1 n/a 0 0 00:06:21.171 tests 2 2 2 0 0 00:06:21.171 asserts 5708 5708 5708 0 n/a 00:06:21.171 00:06:21.171 Elapsed time = 0.708 seconds 00:06:21.171 EAL: No shared files mode enabled, IPC is disabled 00:06:21.171 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:21.171 EAL: Calling mem event callback 'spdk:(nil)' 00:06:21.171 EAL: request: mp_malloc_sync 00:06:21.171 EAL: No shared files mode enabled, IPC is disabled 00:06:21.171 EAL: Heap on socket 0 was shrunk by 2MB 00:06:21.171 EAL: No shared files mode enabled, IPC is disabled 00:06:21.171 EAL: No shared files mode enabled, IPC is disabled 00:06:21.171 EAL: No shared files mode enabled, IPC is disabled 00:06:21.171 ************************************ 00:06:21.171 END TEST env_vtophys 00:06:21.171 ************************************ 00:06:21.171 00:06:21.171 real 0m0.904s 00:06:21.171 user 0m0.461s 00:06:21.171 sys 0m0.309s 00:06:21.171 05:58:46 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.171 05:58:46 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 05:58:46 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:21.171 05:58:46 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.171 05:58:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.171 05:58:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 ************************************ 00:06:21.171 START TEST env_pci 00:06:21.171 ************************************ 00:06:21.171 05:58:46 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:06:21.171 00:06:21.171 00:06:21.171 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.171 http://cunit.sourceforge.net/ 00:06:21.171 00:06:21.171 00:06:21.171 Suite: pci 00:06:21.171 Test: pci_hook ...[2024-10-01 05:58:46.726622] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 68838 has claimed it 00:06:21.171 passed 00:06:21.171 00:06:21.171 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.171 suites 1 1 n/a 0 0 00:06:21.171 tests 1 1 1 0 0 00:06:21.171 asserts 25 25 25 0 n/a 00:06:21.171 00:06:21.171 Elapsed time = 0.002 seconds 00:06:21.171 EAL: Cannot find 
device (10000:00:01.0) 00:06:21.171 EAL: Failed to attach device on primary process 00:06:21.171 00:06:21.171 real 0m0.020s 00:06:21.171 user 0m0.012s 00:06:21.171 sys 0m0.008s 00:06:21.171 05:58:46 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.171 05:58:46 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:21.171 ************************************ 00:06:21.171 END TEST env_pci 00:06:21.171 ************************************ 00:06:21.171 05:58:46 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:21.171 05:58:46 env -- env/env.sh@15 -- # uname 00:06:21.171 05:58:46 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:21.171 05:58:46 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:21.171 05:58:46 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:21.171 05:58:46 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:21.171 05:58:46 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.171 05:58:46 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.431 ************************************ 00:06:21.431 START TEST env_dpdk_post_init 00:06:21.431 ************************************ 00:06:21.431 05:58:46 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:21.431 EAL: Detected CPU lcores: 10 00:06:21.431 EAL: Detected NUMA nodes: 1 00:06:21.431 EAL: Detected shared linkage of DPDK 00:06:21.431 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:21.431 EAL: Selected IOVA mode 'PA' 00:06:21.431 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:21.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:06:21.431 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:06:21.431 Starting DPDK initialization... 00:06:21.431 Starting SPDK post initialization... 00:06:21.431 SPDK NVMe probe 00:06:21.431 Attaching to 0000:00:10.0 00:06:21.431 Attaching to 0000:00:11.0 00:06:21.431 Attached to 0000:00:10.0 00:06:21.431 Attached to 0000:00:11.0 00:06:21.431 Cleaning up... 
00:06:21.431 00:06:21.431 real 0m0.175s 00:06:21.431 user 0m0.045s 00:06:21.431 sys 0m0.030s 00:06:21.431 05:58:46 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.431 05:58:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:21.431 ************************************ 00:06:21.431 END TEST env_dpdk_post_init 00:06:21.431 ************************************ 00:06:21.431 05:58:47 env -- env/env.sh@26 -- # uname 00:06:21.431 05:58:47 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:21.431 05:58:47 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.431 05:58:47 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.431 05:58:47 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.431 05:58:47 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.431 ************************************ 00:06:21.431 START TEST env_mem_callbacks 00:06:21.431 ************************************ 00:06:21.431 05:58:47 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:06:21.431 EAL: Detected CPU lcores: 10 00:06:21.431 EAL: Detected NUMA nodes: 1 00:06:21.431 EAL: Detected shared linkage of DPDK 00:06:21.691 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:21.691 EAL: Selected IOVA mode 'PA' 00:06:21.691 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:21.691 00:06:21.691 00:06:21.691 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.691 http://cunit.sourceforge.net/ 00:06:21.691 00:06:21.691 00:06:21.691 Suite: memory 00:06:21.691 Test: test ... 00:06:21.691 register 0x200000200000 2097152 00:06:21.691 malloc 3145728 00:06:21.691 register 0x200000400000 4194304 00:06:21.691 buf 0x200000500000 len 3145728 PASSED 00:06:21.691 malloc 64 00:06:21.691 buf 0x2000004fff40 len 64 PASSED 00:06:21.691 malloc 4194304 00:06:21.691 register 0x200000800000 6291456 00:06:21.691 buf 0x200000a00000 len 4194304 PASSED 00:06:21.691 free 0x200000500000 3145728 00:06:21.691 free 0x2000004fff40 64 00:06:21.691 unregister 0x200000400000 4194304 PASSED 00:06:21.691 free 0x200000a00000 4194304 00:06:21.691 unregister 0x200000800000 6291456 PASSED 00:06:21.691 malloc 8388608 00:06:21.691 register 0x200000400000 10485760 00:06:21.691 buf 0x200000600000 len 8388608 PASSED 00:06:21.691 free 0x200000600000 8388608 00:06:21.691 unregister 0x200000400000 10485760 PASSED 00:06:21.691 passed 00:06:21.691 00:06:21.691 Run Summary: Type Total Ran Passed Failed Inactive 00:06:21.691 suites 1 1 n/a 0 0 00:06:21.691 tests 1 1 1 0 0 00:06:21.691 asserts 15 15 15 0 n/a 00:06:21.691 00:06:21.691 Elapsed time = 0.007 seconds 00:06:21.691 00:06:21.691 real 0m0.138s 00:06:21.691 user 0m0.013s 00:06:21.691 sys 0m0.021s 00:06:21.691 05:58:47 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.691 05:58:47 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:21.691 ************************************ 00:06:21.691 END TEST env_mem_callbacks 00:06:21.691 ************************************ 00:06:21.691 00:06:21.691 real 0m1.939s 00:06:21.691 user 0m0.959s 00:06:21.691 sys 0m0.611s 00:06:21.691 05:58:47 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:21.691 05:58:47 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.691 ************************************ 00:06:21.691 END TEST env 00:06:21.691 
************************************ 00:06:21.691 05:58:47 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:21.691 05:58:47 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:21.691 05:58:47 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:21.691 05:58:47 -- common/autotest_common.sh@10 -- # set +x 00:06:21.691 ************************************ 00:06:21.691 START TEST rpc 00:06:21.691 ************************************ 00:06:21.691 05:58:47 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:06:21.950 * Looking for test storage... 00:06:21.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:21.950 05:58:47 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:21.950 05:58:47 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:21.950 05:58:47 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:21.950 05:58:47 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.950 05:58:47 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.950 05:58:47 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.950 05:58:47 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.950 05:58:47 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.950 05:58:47 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.950 05:58:47 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.950 05:58:47 rpc -- scripts/common.sh@345 -- # : 1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.950 05:58:47 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.950 05:58:47 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.950 05:58:47 rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.950 05:58:47 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.950 05:58:47 rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.950 05:58:47 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.950 05:58:47 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.950 05:58:47 rpc -- scripts/common.sh@368 -- # return 0 00:06:21.950 05:58:47 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:21.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.951 --rc genhtml_branch_coverage=1 00:06:21.951 --rc genhtml_function_coverage=1 00:06:21.951 --rc genhtml_legend=1 00:06:21.951 --rc geninfo_all_blocks=1 00:06:21.951 --rc geninfo_unexecuted_blocks=1 00:06:21.951 00:06:21.951 ' 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:21.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.951 --rc genhtml_branch_coverage=1 00:06:21.951 --rc genhtml_function_coverage=1 00:06:21.951 --rc genhtml_legend=1 00:06:21.951 --rc geninfo_all_blocks=1 00:06:21.951 --rc geninfo_unexecuted_blocks=1 00:06:21.951 00:06:21.951 ' 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:21.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.951 --rc genhtml_branch_coverage=1 00:06:21.951 --rc genhtml_function_coverage=1 00:06:21.951 --rc genhtml_legend=1 00:06:21.951 --rc geninfo_all_blocks=1 00:06:21.951 --rc geninfo_unexecuted_blocks=1 00:06:21.951 00:06:21.951 ' 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:21.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.951 --rc genhtml_branch_coverage=1 00:06:21.951 --rc genhtml_function_coverage=1 00:06:21.951 --rc genhtml_legend=1 00:06:21.951 --rc geninfo_all_blocks=1 00:06:21.951 --rc geninfo_unexecuted_blocks=1 00:06:21.951 00:06:21.951 ' 00:06:21.951 05:58:47 rpc -- rpc/rpc.sh@65 -- # spdk_pid=68955 00:06:21.951 05:58:47 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:06:21.951 05:58:47 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:21.951 05:58:47 rpc -- rpc/rpc.sh@67 -- # waitforlisten 68955 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@831 -- # '[' -z 68955 ']' 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:21.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
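At this point rpc.sh has launched spdk_tgt with the bdev tracepoint group enabled and waitforlisten is polling until /var/tmp/spdk.sock answers; a rough hand-run equivalent of that startup handshake (the polling loop below is a hypothetical stand-in for waitforlisten, not the harness code) is:

# Start the target with bdev tracepoints enabled, then poll the default RPC
# socket until it responds before issuing any further rpc.py calls.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version > /dev/null 2>&1; do
    sleep 0.5
done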
00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:21.951 05:58:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.951 [2024-10-01 05:58:47.511804] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:21.951 [2024-10-01 05:58:47.511927] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68955 ] 00:06:22.211 [2024-10-01 05:58:47.649574] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.211 [2024-10-01 05:58:47.686849] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:22.211 [2024-10-01 05:58:47.686916] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 68955' to capture a snapshot of events at runtime. 00:06:22.211 [2024-10-01 05:58:47.686935] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:22.211 [2024-10-01 05:58:47.686947] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:22.211 [2024-10-01 05:58:47.686971] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid68955 for offline analysis/debug. 00:06:22.211 [2024-10-01 05:58:47.687018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.211 [2024-10-01 05:58:47.724752] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:22.470 05:58:47 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:22.470 05:58:47 rpc -- common/autotest_common.sh@864 -- # return 0 00:06:22.470 05:58:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.470 05:58:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:06:22.470 05:58:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:22.470 05:58:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:22.470 05:58:47 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.470 05:58:47 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.470 05:58:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.470 ************************************ 00:06:22.470 START TEST rpc_integrity 00:06:22.470 ************************************ 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd 
bdev_malloc_create 8 512 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.470 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:22.470 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:22.471 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.471 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 05:58:47 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.471 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:22.471 { 00:06:22.471 "name": "Malloc0", 00:06:22.471 "aliases": [ 00:06:22.471 "ffbb5d97-484c-4060-b939-71c908d7f8be" 00:06:22.471 ], 00:06:22.471 "product_name": "Malloc disk", 00:06:22.471 "block_size": 512, 00:06:22.471 "num_blocks": 16384, 00:06:22.471 "uuid": "ffbb5d97-484c-4060-b939-71c908d7f8be", 00:06:22.471 "assigned_rate_limits": { 00:06:22.471 "rw_ios_per_sec": 0, 00:06:22.471 "rw_mbytes_per_sec": 0, 00:06:22.471 "r_mbytes_per_sec": 0, 00:06:22.471 "w_mbytes_per_sec": 0 00:06:22.471 }, 00:06:22.471 "claimed": false, 00:06:22.471 "zoned": false, 00:06:22.471 "supported_io_types": { 00:06:22.471 "read": true, 00:06:22.471 "write": true, 00:06:22.471 "unmap": true, 00:06:22.471 "flush": true, 00:06:22.471 "reset": true, 00:06:22.471 "nvme_admin": false, 00:06:22.471 "nvme_io": false, 00:06:22.471 "nvme_io_md": false, 00:06:22.471 "write_zeroes": true, 00:06:22.471 "zcopy": true, 00:06:22.471 "get_zone_info": false, 00:06:22.471 "zone_management": false, 00:06:22.471 "zone_append": false, 00:06:22.471 "compare": false, 00:06:22.471 "compare_and_write": false, 00:06:22.471 "abort": true, 00:06:22.471 "seek_hole": false, 00:06:22.471 "seek_data": false, 00:06:22.471 "copy": true, 00:06:22.471 "nvme_iov_md": false 00:06:22.471 }, 00:06:22.471 "memory_domains": [ 00:06:22.471 { 00:06:22.471 "dma_device_id": "system", 00:06:22.471 "dma_device_type": 1 00:06:22.471 }, 00:06:22.471 { 00:06:22.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.471 "dma_device_type": 2 00:06:22.471 } 00:06:22.471 ], 00:06:22.471 "driver_specific": {} 00:06:22.471 } 00:06:22.471 ]' 00:06:22.471 05:58:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:22.471 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:22.471 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 [2024-10-01 05:58:48.013787] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:22.471 [2024-10-01 05:58:48.013833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:22.471 [2024-10-01 05:58:48.013874] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf9a500 00:06:22.471 [2024-10-01 05:58:48.013887] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:22.471 [2024-10-01 05:58:48.015498] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:22.471 [2024-10-01 05:58:48.015534] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
Passthru0 00:06:22.471 Passthru0 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.471 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.471 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.471 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:22.471 { 00:06:22.471 "name": "Malloc0", 00:06:22.471 "aliases": [ 00:06:22.471 "ffbb5d97-484c-4060-b939-71c908d7f8be" 00:06:22.471 ], 00:06:22.471 "product_name": "Malloc disk", 00:06:22.471 "block_size": 512, 00:06:22.471 "num_blocks": 16384, 00:06:22.471 "uuid": "ffbb5d97-484c-4060-b939-71c908d7f8be", 00:06:22.471 "assigned_rate_limits": { 00:06:22.471 "rw_ios_per_sec": 0, 00:06:22.471 "rw_mbytes_per_sec": 0, 00:06:22.471 "r_mbytes_per_sec": 0, 00:06:22.471 "w_mbytes_per_sec": 0 00:06:22.471 }, 00:06:22.471 "claimed": true, 00:06:22.471 "claim_type": "exclusive_write", 00:06:22.471 "zoned": false, 00:06:22.471 "supported_io_types": { 00:06:22.471 "read": true, 00:06:22.471 "write": true, 00:06:22.471 "unmap": true, 00:06:22.471 "flush": true, 00:06:22.471 "reset": true, 00:06:22.471 "nvme_admin": false, 00:06:22.471 "nvme_io": false, 00:06:22.471 "nvme_io_md": false, 00:06:22.471 "write_zeroes": true, 00:06:22.471 "zcopy": true, 00:06:22.471 "get_zone_info": false, 00:06:22.471 "zone_management": false, 00:06:22.471 "zone_append": false, 00:06:22.471 "compare": false, 00:06:22.471 "compare_and_write": false, 00:06:22.471 "abort": true, 00:06:22.471 "seek_hole": false, 00:06:22.471 "seek_data": false, 00:06:22.471 "copy": true, 00:06:22.471 "nvme_iov_md": false 00:06:22.471 }, 00:06:22.471 "memory_domains": [ 00:06:22.471 { 00:06:22.471 "dma_device_id": "system", 00:06:22.471 "dma_device_type": 1 00:06:22.471 }, 00:06:22.471 { 00:06:22.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.471 "dma_device_type": 2 00:06:22.471 } 00:06:22.471 ], 00:06:22.471 "driver_specific": {} 00:06:22.471 }, 00:06:22.471 { 00:06:22.471 "name": "Passthru0", 00:06:22.471 "aliases": [ 00:06:22.471 "dbe92081-d095-58d8-ab99-68e43ac66271" 00:06:22.471 ], 00:06:22.471 "product_name": "passthru", 00:06:22.471 "block_size": 512, 00:06:22.471 "num_blocks": 16384, 00:06:22.471 "uuid": "dbe92081-d095-58d8-ab99-68e43ac66271", 00:06:22.471 "assigned_rate_limits": { 00:06:22.471 "rw_ios_per_sec": 0, 00:06:22.471 "rw_mbytes_per_sec": 0, 00:06:22.471 "r_mbytes_per_sec": 0, 00:06:22.471 "w_mbytes_per_sec": 0 00:06:22.471 }, 00:06:22.471 "claimed": false, 00:06:22.471 "zoned": false, 00:06:22.471 "supported_io_types": { 00:06:22.471 "read": true, 00:06:22.471 "write": true, 00:06:22.471 "unmap": true, 00:06:22.471 "flush": true, 00:06:22.471 "reset": true, 00:06:22.471 "nvme_admin": false, 00:06:22.471 "nvme_io": false, 00:06:22.471 "nvme_io_md": false, 00:06:22.471 "write_zeroes": true, 00:06:22.471 "zcopy": true, 00:06:22.471 "get_zone_info": false, 00:06:22.471 "zone_management": false, 00:06:22.471 "zone_append": false, 00:06:22.471 "compare": false, 00:06:22.471 "compare_and_write": false, 00:06:22.471 "abort": true, 00:06:22.471 "seek_hole": false, 00:06:22.471 "seek_data": false, 00:06:22.471 "copy": true, 00:06:22.471 "nvme_iov_md": false 00:06:22.471 }, 00:06:22.471 "memory_domains": [ 00:06:22.471 { 00:06:22.471 "dma_device_id": "system", 00:06:22.471 
"dma_device_type": 1 00:06:22.471 }, 00:06:22.471 { 00:06:22.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.471 "dma_device_type": 2 00:06:22.471 } 00:06:22.471 ], 00:06:22.471 "driver_specific": { 00:06:22.471 "passthru": { 00:06:22.471 "name": "Passthru0", 00:06:22.471 "base_bdev_name": "Malloc0" 00:06:22.471 } 00:06:22.471 } 00:06:22.471 } 00:06:22.471 ]' 00:06:22.471 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:22.731 05:58:48 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:22.731 00:06:22.731 real 0m0.329s 00:06:22.731 user 0m0.221s 00:06:22.731 sys 0m0.042s 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 ************************************ 00:06:22.731 END TEST rpc_integrity 00:06:22.731 ************************************ 00:06:22.731 05:58:48 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:22.731 05:58:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.731 05:58:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.731 05:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 ************************************ 00:06:22.731 START TEST rpc_plugins 00:06:22.731 ************************************ 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:22.731 { 00:06:22.731 "name": "Malloc1", 00:06:22.731 "aliases": [ 00:06:22.731 "8d62812c-ebab-4f1f-8d6e-99d47ef3738b" 00:06:22.731 ], 00:06:22.731 "product_name": "Malloc disk", 00:06:22.731 "block_size": 4096, 00:06:22.731 "num_blocks": 256, 00:06:22.731 "uuid": "8d62812c-ebab-4f1f-8d6e-99d47ef3738b", 00:06:22.731 "assigned_rate_limits": { 00:06:22.731 "rw_ios_per_sec": 0, 00:06:22.731 "rw_mbytes_per_sec": 0, 00:06:22.731 "r_mbytes_per_sec": 0, 00:06:22.731 "w_mbytes_per_sec": 0 00:06:22.731 }, 00:06:22.731 "claimed": false, 00:06:22.731 "zoned": false, 00:06:22.731 "supported_io_types": { 00:06:22.731 "read": true, 00:06:22.731 "write": true, 00:06:22.731 "unmap": true, 00:06:22.731 "flush": true, 00:06:22.731 "reset": true, 00:06:22.731 "nvme_admin": false, 00:06:22.731 "nvme_io": false, 00:06:22.731 "nvme_io_md": false, 00:06:22.731 "write_zeroes": true, 00:06:22.731 "zcopy": true, 00:06:22.731 "get_zone_info": false, 00:06:22.731 "zone_management": false, 00:06:22.731 "zone_append": false, 00:06:22.731 "compare": false, 00:06:22.731 "compare_and_write": false, 00:06:22.731 "abort": true, 00:06:22.731 "seek_hole": false, 00:06:22.731 "seek_data": false, 00:06:22.731 "copy": true, 00:06:22.731 "nvme_iov_md": false 00:06:22.731 }, 00:06:22.731 "memory_domains": [ 00:06:22.731 { 00:06:22.731 "dma_device_id": "system", 00:06:22.731 "dma_device_type": 1 00:06:22.731 }, 00:06:22.731 { 00:06:22.731 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:22.731 "dma_device_type": 2 00:06:22.731 } 00:06:22.731 ], 00:06:22.731 "driver_specific": {} 00:06:22.731 } 00:06:22.731 ]' 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.731 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.731 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.991 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.991 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:22.991 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:22.991 05:58:48 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:22.991 00:06:22.991 real 0m0.172s 00:06:22.991 user 0m0.112s 00:06:22.991 sys 0m0.022s 00:06:22.991 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.991 05:58:48 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:22.991 ************************************ 00:06:22.991 END TEST rpc_plugins 00:06:22.991 ************************************ 00:06:22.991 05:58:48 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:22.991 05:58:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.991 05:58:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.991 05:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.991 ************************************ 00:06:22.991 START TEST 
rpc_trace_cmd_test 00:06:22.991 ************************************ 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:22.991 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid68955", 00:06:22.991 "tpoint_group_mask": "0x8", 00:06:22.991 "iscsi_conn": { 00:06:22.991 "mask": "0x2", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "scsi": { 00:06:22.991 "mask": "0x4", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "bdev": { 00:06:22.991 "mask": "0x8", 00:06:22.991 "tpoint_mask": "0xffffffffffffffff" 00:06:22.991 }, 00:06:22.991 "nvmf_rdma": { 00:06:22.991 "mask": "0x10", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "nvmf_tcp": { 00:06:22.991 "mask": "0x20", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "ftl": { 00:06:22.991 "mask": "0x40", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "blobfs": { 00:06:22.991 "mask": "0x80", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "dsa": { 00:06:22.991 "mask": "0x200", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "thread": { 00:06:22.991 "mask": "0x400", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "nvme_pcie": { 00:06:22.991 "mask": "0x800", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "iaa": { 00:06:22.991 "mask": "0x1000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "nvme_tcp": { 00:06:22.991 "mask": "0x2000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "bdev_nvme": { 00:06:22.991 "mask": "0x4000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "sock": { 00:06:22.991 "mask": "0x8000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "blob": { 00:06:22.991 "mask": "0x10000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 }, 00:06:22.991 "bdev_raid": { 00:06:22.991 "mask": "0x20000", 00:06:22.991 "tpoint_mask": "0x0" 00:06:22.991 } 00:06:22.991 }' 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:22.991 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:23.251 00:06:23.251 real 0m0.276s 00:06:23.251 user 0m0.233s 00:06:23.251 sys 0m0.029s 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.251 05:58:48 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:23.251 ************************************ 00:06:23.251 END TEST rpc_trace_cmd_test 00:06:23.251 ************************************ 00:06:23.251 05:58:48 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:23.251 05:58:48 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:23.251 05:58:48 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:23.251 05:58:48 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.251 05:58:48 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.251 05:58:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.251 ************************************ 00:06:23.251 START TEST rpc_daemon_integrity 00:06:23.251 ************************************ 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.251 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:23.511 { 00:06:23.511 "name": "Malloc2", 00:06:23.511 "aliases": [ 00:06:23.511 "7fcce0ae-429a-4739-b74b-b683ef1e1469" 00:06:23.511 ], 00:06:23.511 "product_name": "Malloc disk", 00:06:23.511 "block_size": 512, 00:06:23.511 "num_blocks": 16384, 00:06:23.511 "uuid": "7fcce0ae-429a-4739-b74b-b683ef1e1469", 00:06:23.511 "assigned_rate_limits": { 00:06:23.511 "rw_ios_per_sec": 0, 00:06:23.511 "rw_mbytes_per_sec": 0, 00:06:23.511 "r_mbytes_per_sec": 0, 00:06:23.511 "w_mbytes_per_sec": 0 00:06:23.511 }, 00:06:23.511 "claimed": false, 00:06:23.511 "zoned": false, 00:06:23.511 "supported_io_types": { 00:06:23.511 "read": true, 00:06:23.511 "write": true, 00:06:23.511 "unmap": true, 00:06:23.511 "flush": true, 00:06:23.511 "reset": true, 00:06:23.511 "nvme_admin": false, 00:06:23.511 "nvme_io": false, 00:06:23.511 "nvme_io_md": false, 00:06:23.511 "write_zeroes": true, 00:06:23.511 "zcopy": true, 00:06:23.511 "get_zone_info": false, 00:06:23.511 "zone_management": false, 00:06:23.511 "zone_append": false, 
00:06:23.511 "compare": false, 00:06:23.511 "compare_and_write": false, 00:06:23.511 "abort": true, 00:06:23.511 "seek_hole": false, 00:06:23.511 "seek_data": false, 00:06:23.511 "copy": true, 00:06:23.511 "nvme_iov_md": false 00:06:23.511 }, 00:06:23.511 "memory_domains": [ 00:06:23.511 { 00:06:23.511 "dma_device_id": "system", 00:06:23.511 "dma_device_type": 1 00:06:23.511 }, 00:06:23.511 { 00:06:23.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.511 "dma_device_type": 2 00:06:23.511 } 00:06:23.511 ], 00:06:23.511 "driver_specific": {} 00:06:23.511 } 00:06:23.511 ]' 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.511 [2024-10-01 05:58:48.934269] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:23.511 [2024-10-01 05:58:48.934352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:23.511 [2024-10-01 05:58:48.934379] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xee74b0 00:06:23.511 [2024-10-01 05:58:48.934393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:23.511 [2024-10-01 05:58:48.936249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:23.511 [2024-10-01 05:58:48.936302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:23.511 Passthru0 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.511 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:23.511 { 00:06:23.511 "name": "Malloc2", 00:06:23.511 "aliases": [ 00:06:23.511 "7fcce0ae-429a-4739-b74b-b683ef1e1469" 00:06:23.511 ], 00:06:23.511 "product_name": "Malloc disk", 00:06:23.511 "block_size": 512, 00:06:23.511 "num_blocks": 16384, 00:06:23.511 "uuid": "7fcce0ae-429a-4739-b74b-b683ef1e1469", 00:06:23.511 "assigned_rate_limits": { 00:06:23.511 "rw_ios_per_sec": 0, 00:06:23.511 "rw_mbytes_per_sec": 0, 00:06:23.511 "r_mbytes_per_sec": 0, 00:06:23.511 "w_mbytes_per_sec": 0 00:06:23.511 }, 00:06:23.511 "claimed": true, 00:06:23.511 "claim_type": "exclusive_write", 00:06:23.511 "zoned": false, 00:06:23.511 "supported_io_types": { 00:06:23.511 "read": true, 00:06:23.511 "write": true, 00:06:23.511 "unmap": true, 00:06:23.511 "flush": true, 00:06:23.511 "reset": true, 00:06:23.511 "nvme_admin": false, 00:06:23.511 "nvme_io": false, 00:06:23.511 "nvme_io_md": false, 00:06:23.511 "write_zeroes": true, 00:06:23.511 "zcopy": true, 00:06:23.511 "get_zone_info": false, 00:06:23.511 "zone_management": false, 00:06:23.511 "zone_append": false, 00:06:23.511 "compare": false, 00:06:23.511 "compare_and_write": false, 00:06:23.511 "abort": true, 00:06:23.511 "seek_hole": 
false, 00:06:23.511 "seek_data": false, 00:06:23.511 "copy": true, 00:06:23.511 "nvme_iov_md": false 00:06:23.511 }, 00:06:23.511 "memory_domains": [ 00:06:23.511 { 00:06:23.511 "dma_device_id": "system", 00:06:23.511 "dma_device_type": 1 00:06:23.511 }, 00:06:23.511 { 00:06:23.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.511 "dma_device_type": 2 00:06:23.511 } 00:06:23.511 ], 00:06:23.511 "driver_specific": {} 00:06:23.511 }, 00:06:23.511 { 00:06:23.511 "name": "Passthru0", 00:06:23.511 "aliases": [ 00:06:23.511 "b0601288-b32c-54d8-9b24-1940c86cd9cc" 00:06:23.511 ], 00:06:23.511 "product_name": "passthru", 00:06:23.511 "block_size": 512, 00:06:23.511 "num_blocks": 16384, 00:06:23.511 "uuid": "b0601288-b32c-54d8-9b24-1940c86cd9cc", 00:06:23.511 "assigned_rate_limits": { 00:06:23.511 "rw_ios_per_sec": 0, 00:06:23.511 "rw_mbytes_per_sec": 0, 00:06:23.511 "r_mbytes_per_sec": 0, 00:06:23.511 "w_mbytes_per_sec": 0 00:06:23.511 }, 00:06:23.511 "claimed": false, 00:06:23.511 "zoned": false, 00:06:23.511 "supported_io_types": { 00:06:23.511 "read": true, 00:06:23.511 "write": true, 00:06:23.511 "unmap": true, 00:06:23.511 "flush": true, 00:06:23.511 "reset": true, 00:06:23.511 "nvme_admin": false, 00:06:23.511 "nvme_io": false, 00:06:23.511 "nvme_io_md": false, 00:06:23.511 "write_zeroes": true, 00:06:23.511 "zcopy": true, 00:06:23.511 "get_zone_info": false, 00:06:23.511 "zone_management": false, 00:06:23.511 "zone_append": false, 00:06:23.511 "compare": false, 00:06:23.511 "compare_and_write": false, 00:06:23.511 "abort": true, 00:06:23.511 "seek_hole": false, 00:06:23.511 "seek_data": false, 00:06:23.511 "copy": true, 00:06:23.511 "nvme_iov_md": false 00:06:23.511 }, 00:06:23.511 "memory_domains": [ 00:06:23.511 { 00:06:23.511 "dma_device_id": "system", 00:06:23.511 "dma_device_type": 1 00:06:23.511 }, 00:06:23.511 { 00:06:23.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:23.511 "dma_device_type": 2 00:06:23.511 } 00:06:23.511 ], 00:06:23.511 "driver_specific": { 00:06:23.511 "passthru": { 00:06:23.511 "name": "Passthru0", 00:06:23.511 "base_bdev_name": "Malloc2" 00:06:23.511 } 00:06:23.512 } 00:06:23.512 } 00:06:23.512 ]' 00:06:23.512 05:58:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- 
rpc/rpc.sh@25 -- # bdevs='[]' 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:23.512 00:06:23.512 real 0m0.325s 00:06:23.512 user 0m0.218s 00:06:23.512 sys 0m0.037s 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.512 05:58:49 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:23.512 ************************************ 00:06:23.512 END TEST rpc_daemon_integrity 00:06:23.512 ************************************ 00:06:23.772 05:58:49 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:23.772 05:58:49 rpc -- rpc/rpc.sh@84 -- # killprocess 68955 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@950 -- # '[' -z 68955 ']' 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@954 -- # kill -0 68955 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@955 -- # uname 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68955 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68955' 00:06:23.772 killing process with pid 68955 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@969 -- # kill 68955 00:06:23.772 05:58:49 rpc -- common/autotest_common.sh@974 -- # wait 68955 00:06:24.031 00:06:24.031 real 0m2.169s 00:06:24.031 user 0m2.929s 00:06:24.031 sys 0m0.583s 00:06:24.031 05:58:49 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:24.031 05:58:49 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.031 ************************************ 00:06:24.031 END TEST rpc 00:06:24.031 ************************************ 00:06:24.031 05:58:49 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.031 05:58:49 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.031 05:58:49 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.031 05:58:49 -- common/autotest_common.sh@10 -- # set +x 00:06:24.031 ************************************ 00:06:24.031 START TEST skip_rpc 00:06:24.031 ************************************ 00:06:24.031 05:58:49 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:06:24.031 * Looking for test storage... 
00:06:24.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:06:24.031 05:58:49 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:24.031 05:58:49 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:24.031 05:58:49 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.291 05:58:49 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.291 --rc genhtml_branch_coverage=1 00:06:24.291 --rc genhtml_function_coverage=1 00:06:24.291 --rc genhtml_legend=1 00:06:24.291 --rc geninfo_all_blocks=1 00:06:24.291 --rc geninfo_unexecuted_blocks=1 00:06:24.291 00:06:24.291 ' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.291 --rc genhtml_branch_coverage=1 00:06:24.291 --rc genhtml_function_coverage=1 00:06:24.291 --rc genhtml_legend=1 00:06:24.291 --rc geninfo_all_blocks=1 00:06:24.291 --rc geninfo_unexecuted_blocks=1 00:06:24.291 00:06:24.291 ' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:06:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.291 --rc genhtml_branch_coverage=1 00:06:24.291 --rc genhtml_function_coverage=1 00:06:24.291 --rc genhtml_legend=1 00:06:24.291 --rc geninfo_all_blocks=1 00:06:24.291 --rc geninfo_unexecuted_blocks=1 00:06:24.291 00:06:24.291 ' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:24.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.291 --rc genhtml_branch_coverage=1 00:06:24.291 --rc genhtml_function_coverage=1 00:06:24.291 --rc genhtml_legend=1 00:06:24.291 --rc geninfo_all_blocks=1 00:06:24.291 --rc geninfo_unexecuted_blocks=1 00:06:24.291 00:06:24.291 ' 00:06:24.291 05:58:49 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:24.291 05:58:49 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:24.291 05:58:49 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:24.291 05:58:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.291 ************************************ 00:06:24.291 START TEST skip_rpc 00:06:24.291 ************************************ 00:06:24.291 05:58:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:06:24.291 05:58:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=69148 00:06:24.291 05:58:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:24.291 05:58:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.291 05:58:49 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:24.291 [2024-10-01 05:58:49.746507] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
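The skip_rpc case that begins here deliberately starts the target with --no-rpc-server, so the spdk_get_version call issued a few lines below is expected to fail; a short sketch of that negative check, built from the same commands the harness runs, is:

# With --no-rpc-server the target never opens /var/tmp/spdk.sock, so any
# rpc.py call must fail; the test treats that failure as a pass.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
spdk_pid=$!
sleep 5
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
    echo 'unexpected: RPC server is listening' >&2
fi
kill "$spdk_pid"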
00:06:24.291 [2024-10-01 05:58:49.746813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69148 ] 00:06:24.291 [2024-10-01 05:58:49.885332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.551 [2024-10-01 05:58:49.924477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.551 [2024-10-01 05:58:49.963099] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 69148 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 69148 ']' 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 69148 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69148 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69148' 00:06:29.823 killing process with pid 69148 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 69148 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 69148 00:06:29.823 00:06:29.823 real 0m5.302s 00:06:29.823 user 0m5.009s 00:06:29.823 sys 0m0.205s 00:06:29.823 ************************************ 00:06:29.823 END TEST skip_rpc 00:06:29.823 ************************************ 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:06:29.823 05:58:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 05:58:55 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:29.823 05:58:55 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:29.823 05:58:55 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:29.823 05:58:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 ************************************ 00:06:29.823 START TEST skip_rpc_with_json 00:06:29.823 ************************************ 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=69235 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 69235 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 69235 ']' 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:29.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:29.823 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:29.823 [2024-10-01 05:58:55.116456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
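skip_rpc_with_json, started here, first builds some state over RPC, saves it with save_config, and then boots a second target directly from that JSON file with the RPC server disabled; a rough sketch of that save-and-replay cycle, using the config.json path defined earlier in this log, is:

# Create a TCP transport so there is non-default state to capture, dump the
# live configuration to config.json, then start a fresh target from that file
# with RPC disabled, as the test does further below.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
$rpc nvmf_create_transport -t tcp
$rpc save_config > "$cfg"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json "$cfg"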
00:06:29.823 [2024-10-01 05:58:55.116604] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69235 ] 00:06:29.823 [2024-10-01 05:58:55.254362] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.823 [2024-10-01 05:58:55.290035] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.823 [2024-10-01 05:58:55.329991] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.083 [2024-10-01 05:58:55.454471] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:30.083 request: 00:06:30.083 { 00:06:30.083 "trtype": "tcp", 00:06:30.083 "method": "nvmf_get_transports", 00:06:30.083 "req_id": 1 00:06:30.083 } 00:06:30.083 Got JSON-RPC error response 00:06:30.083 response: 00:06:30.083 { 00:06:30.083 "code": -19, 00:06:30.083 "message": "No such device" 00:06:30.083 } 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.083 [2024-10-01 05:58:55.466563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:30.083 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:30.083 { 00:06:30.083 "subsystems": [ 00:06:30.083 { 00:06:30.083 "subsystem": "fsdev", 00:06:30.083 "config": [ 00:06:30.083 { 00:06:30.083 "method": "fsdev_set_opts", 00:06:30.083 "params": { 00:06:30.083 "fsdev_io_pool_size": 65535, 00:06:30.083 "fsdev_io_cache_size": 256 00:06:30.083 } 00:06:30.083 } 00:06:30.083 ] 00:06:30.083 }, 00:06:30.083 { 00:06:30.083 "subsystem": "keyring", 00:06:30.083 "config": [] 00:06:30.083 }, 00:06:30.083 { 00:06:30.083 "subsystem": "iobuf", 00:06:30.083 "config": [ 00:06:30.083 { 00:06:30.083 "method": "iobuf_set_options", 00:06:30.083 "params": { 00:06:30.083 "small_pool_count": 8192, 00:06:30.083 "large_pool_count": 1024, 00:06:30.083 "small_bufsize": 8192, 00:06:30.083 "large_bufsize": 135168 00:06:30.083 } 00:06:30.083 } 00:06:30.083 ] 00:06:30.083 
}, 00:06:30.083 { 00:06:30.083 "subsystem": "sock", 00:06:30.083 "config": [ 00:06:30.083 { 00:06:30.083 "method": "sock_set_default_impl", 00:06:30.083 "params": { 00:06:30.083 "impl_name": "uring" 00:06:30.083 } 00:06:30.083 }, 00:06:30.083 { 00:06:30.083 "method": "sock_impl_set_options", 00:06:30.083 "params": { 00:06:30.083 "impl_name": "ssl", 00:06:30.083 "recv_buf_size": 4096, 00:06:30.083 "send_buf_size": 4096, 00:06:30.083 "enable_recv_pipe": true, 00:06:30.083 "enable_quickack": false, 00:06:30.083 "enable_placement_id": 0, 00:06:30.083 "enable_zerocopy_send_server": true, 00:06:30.083 "enable_zerocopy_send_client": false, 00:06:30.083 "zerocopy_threshold": 0, 00:06:30.083 "tls_version": 0, 00:06:30.083 "enable_ktls": false 00:06:30.083 } 00:06:30.083 }, 00:06:30.083 { 00:06:30.083 "method": "sock_impl_set_options", 00:06:30.083 "params": { 00:06:30.083 "impl_name": "posix", 00:06:30.083 "recv_buf_size": 2097152, 00:06:30.083 "send_buf_size": 2097152, 00:06:30.084 "enable_recv_pipe": true, 00:06:30.084 "enable_quickack": false, 00:06:30.084 "enable_placement_id": 0, 00:06:30.084 "enable_zerocopy_send_server": true, 00:06:30.084 "enable_zerocopy_send_client": false, 00:06:30.084 "zerocopy_threshold": 0, 00:06:30.084 "tls_version": 0, 00:06:30.084 "enable_ktls": false 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "sock_impl_set_options", 00:06:30.084 "params": { 00:06:30.084 "impl_name": "uring", 00:06:30.084 "recv_buf_size": 2097152, 00:06:30.084 "send_buf_size": 2097152, 00:06:30.084 "enable_recv_pipe": true, 00:06:30.084 "enable_quickack": false, 00:06:30.084 "enable_placement_id": 0, 00:06:30.084 "enable_zerocopy_send_server": false, 00:06:30.084 "enable_zerocopy_send_client": false, 00:06:30.084 "zerocopy_threshold": 0, 00:06:30.084 "tls_version": 0, 00:06:30.084 "enable_ktls": false 00:06:30.084 } 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "vmd", 00:06:30.084 "config": [] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "accel", 00:06:30.084 "config": [ 00:06:30.084 { 00:06:30.084 "method": "accel_set_options", 00:06:30.084 "params": { 00:06:30.084 "small_cache_size": 128, 00:06:30.084 "large_cache_size": 16, 00:06:30.084 "task_count": 2048, 00:06:30.084 "sequence_count": 2048, 00:06:30.084 "buf_count": 2048 00:06:30.084 } 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "bdev", 00:06:30.084 "config": [ 00:06:30.084 { 00:06:30.084 "method": "bdev_set_options", 00:06:30.084 "params": { 00:06:30.084 "bdev_io_pool_size": 65535, 00:06:30.084 "bdev_io_cache_size": 256, 00:06:30.084 "bdev_auto_examine": true, 00:06:30.084 "iobuf_small_cache_size": 128, 00:06:30.084 "iobuf_large_cache_size": 16 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "bdev_raid_set_options", 00:06:30.084 "params": { 00:06:30.084 "process_window_size_kb": 1024, 00:06:30.084 "process_max_bandwidth_mb_sec": 0 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "bdev_iscsi_set_options", 00:06:30.084 "params": { 00:06:30.084 "timeout_sec": 30 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "bdev_nvme_set_options", 00:06:30.084 "params": { 00:06:30.084 "action_on_timeout": "none", 00:06:30.084 "timeout_us": 0, 00:06:30.084 "timeout_admin_us": 0, 00:06:30.084 "keep_alive_timeout_ms": 10000, 00:06:30.084 "arbitration_burst": 0, 00:06:30.084 "low_priority_weight": 0, 00:06:30.084 "medium_priority_weight": 0, 00:06:30.084 "high_priority_weight": 0, 
00:06:30.084 "nvme_adminq_poll_period_us": 10000, 00:06:30.084 "nvme_ioq_poll_period_us": 0, 00:06:30.084 "io_queue_requests": 0, 00:06:30.084 "delay_cmd_submit": true, 00:06:30.084 "transport_retry_count": 4, 00:06:30.084 "bdev_retry_count": 3, 00:06:30.084 "transport_ack_timeout": 0, 00:06:30.084 "ctrlr_loss_timeout_sec": 0, 00:06:30.084 "reconnect_delay_sec": 0, 00:06:30.084 "fast_io_fail_timeout_sec": 0, 00:06:30.084 "disable_auto_failback": false, 00:06:30.084 "generate_uuids": false, 00:06:30.084 "transport_tos": 0, 00:06:30.084 "nvme_error_stat": false, 00:06:30.084 "rdma_srq_size": 0, 00:06:30.084 "io_path_stat": false, 00:06:30.084 "allow_accel_sequence": false, 00:06:30.084 "rdma_max_cq_size": 0, 00:06:30.084 "rdma_cm_event_timeout_ms": 0, 00:06:30.084 "dhchap_digests": [ 00:06:30.084 "sha256", 00:06:30.084 "sha384", 00:06:30.084 "sha512" 00:06:30.084 ], 00:06:30.084 "dhchap_dhgroups": [ 00:06:30.084 "null", 00:06:30.084 "ffdhe2048", 00:06:30.084 "ffdhe3072", 00:06:30.084 "ffdhe4096", 00:06:30.084 "ffdhe6144", 00:06:30.084 "ffdhe8192" 00:06:30.084 ] 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "bdev_nvme_set_hotplug", 00:06:30.084 "params": { 00:06:30.084 "period_us": 100000, 00:06:30.084 "enable": false 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "bdev_wait_for_examine" 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "scsi", 00:06:30.084 "config": null 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "scheduler", 00:06:30.084 "config": [ 00:06:30.084 { 00:06:30.084 "method": "framework_set_scheduler", 00:06:30.084 "params": { 00:06:30.084 "name": "static" 00:06:30.084 } 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "vhost_scsi", 00:06:30.084 "config": [] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "vhost_blk", 00:06:30.084 "config": [] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "ublk", 00:06:30.084 "config": [] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "nbd", 00:06:30.084 "config": [] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "nvmf", 00:06:30.084 "config": [ 00:06:30.084 { 00:06:30.084 "method": "nvmf_set_config", 00:06:30.084 "params": { 00:06:30.084 "discovery_filter": "match_any", 00:06:30.084 "admin_cmd_passthru": { 00:06:30.084 "identify_ctrlr": false 00:06:30.084 }, 00:06:30.084 "dhchap_digests": [ 00:06:30.084 "sha256", 00:06:30.084 "sha384", 00:06:30.084 "sha512" 00:06:30.084 ], 00:06:30.084 "dhchap_dhgroups": [ 00:06:30.084 "null", 00:06:30.084 "ffdhe2048", 00:06:30.084 "ffdhe3072", 00:06:30.084 "ffdhe4096", 00:06:30.084 "ffdhe6144", 00:06:30.084 "ffdhe8192" 00:06:30.084 ] 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "nvmf_set_max_subsystems", 00:06:30.084 "params": { 00:06:30.084 "max_subsystems": 1024 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "nvmf_set_crdt", 00:06:30.084 "params": { 00:06:30.084 "crdt1": 0, 00:06:30.084 "crdt2": 0, 00:06:30.084 "crdt3": 0 00:06:30.084 } 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "method": "nvmf_create_transport", 00:06:30.084 "params": { 00:06:30.084 "trtype": "TCP", 00:06:30.084 "max_queue_depth": 128, 00:06:30.084 "max_io_qpairs_per_ctrlr": 127, 00:06:30.084 "in_capsule_data_size": 4096, 00:06:30.084 "max_io_size": 131072, 00:06:30.084 "io_unit_size": 131072, 00:06:30.084 "max_aq_depth": 128, 00:06:30.084 "num_shared_buffers": 511, 00:06:30.084 "buf_cache_size": 4294967295, 00:06:30.084 
"dif_insert_or_strip": false, 00:06:30.084 "zcopy": false, 00:06:30.084 "c2h_success": true, 00:06:30.084 "sock_priority": 0, 00:06:30.084 "abort_timeout_sec": 1, 00:06:30.084 "ack_timeout": 0, 00:06:30.084 "data_wr_pool_size": 0 00:06:30.084 } 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 }, 00:06:30.084 { 00:06:30.084 "subsystem": "iscsi", 00:06:30.084 "config": [ 00:06:30.084 { 00:06:30.084 "method": "iscsi_set_options", 00:06:30.084 "params": { 00:06:30.084 "node_base": "iqn.2016-06.io.spdk", 00:06:30.084 "max_sessions": 128, 00:06:30.084 "max_connections_per_session": 2, 00:06:30.084 "max_queue_depth": 64, 00:06:30.084 "default_time2wait": 2, 00:06:30.084 "default_time2retain": 20, 00:06:30.084 "first_burst_length": 8192, 00:06:30.084 "immediate_data": true, 00:06:30.084 "allow_duplicated_isid": false, 00:06:30.084 "error_recovery_level": 0, 00:06:30.084 "nop_timeout": 60, 00:06:30.084 "nop_in_interval": 30, 00:06:30.084 "disable_chap": false, 00:06:30.084 "require_chap": false, 00:06:30.084 "mutual_chap": false, 00:06:30.084 "chap_group": 0, 00:06:30.084 "max_large_datain_per_connection": 64, 00:06:30.084 "max_r2t_per_connection": 4, 00:06:30.084 "pdu_pool_size": 36864, 00:06:30.084 "immediate_data_pool_size": 16384, 00:06:30.084 "data_out_pool_size": 2048 00:06:30.084 } 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 } 00:06:30.084 ] 00:06:30.084 } 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 69235 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69235 ']' 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69235 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69235 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:30.084 killing process with pid 69235 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69235' 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69235 00:06:30.084 05:58:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69235 00:06:30.343 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=69255 00:06:30.343 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:30.343 05:58:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 69255 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 69255 ']' 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 69255 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69255 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.614 killing process with pid 69255 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69255' 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 69255 00:06:35.614 05:59:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 69255 00:06:35.614 05:59:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:35.614 05:59:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:35.614 00:06:35.614 real 0m6.181s 00:06:35.614 user 0m5.903s 00:06:35.614 sys 0m0.469s 00:06:35.614 ************************************ 00:06:35.614 END TEST skip_rpc_with_json 00:06:35.614 ************************************ 00:06:35.614 05:59:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.614 05:59:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.874 05:59:01 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.874 ************************************ 00:06:35.874 START TEST skip_rpc_with_delay 00:06:35.874 ************************************ 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:35.874 [2024-10-01 05:59:01.314985] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:06:35.874 [2024-10-01 05:59:01.315074] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:35.874 00:06:35.874 real 0m0.068s 00:06:35.874 user 0m0.043s 00:06:35.874 sys 0m0.024s 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:35.874 05:59:01 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:35.874 ************************************ 00:06:35.874 END TEST skip_rpc_with_delay 00:06:35.874 ************************************ 00:06:35.874 05:59:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:35.874 05:59:01 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:35.874 05:59:01 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:35.874 05:59:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.874 ************************************ 00:06:35.874 START TEST exit_on_failed_rpc_init 00:06:35.874 ************************************ 00:06:35.874 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:06:35.874 05:59:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=69359 00:06:35.874 05:59:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.874 05:59:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 69359 00:06:35.874 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 69359 ']' 00:06:35.875 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.875 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:35.875 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.875 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:35.875 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:35.875 [2024-10-01 05:59:01.432560] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:35.875 [2024-10-01 05:59:01.432663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69359 ] 00:06:36.134 [2024-10-01 05:59:01.565478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.134 [2024-10-01 05:59:01.599336] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.134 [2024-10-01 05:59:01.634637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:36.394 05:59:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:36.394 [2024-10-01 05:59:01.830281] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:36.394 [2024-10-01 05:59:01.830381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69369 ] 00:06:36.394 [2024-10-01 05:59:01.971260] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.654 [2024-10-01 05:59:02.012303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:36.654 [2024-10-01 05:59:02.012403] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
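Note on the exit_on_failed_rpc_init failure above: the second spdk_tgt instance (core mask 0x2) is expected to die because the first instance (core mask 0x1) already owns the default RPC socket /var/tmp/spdk.sock, which produces the "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." error seen here. A rough manual reproduction, assuming the same build tree layout shown in this log, looks like the sketch below; the bare sleep is a crude stand-in for the suite's waitforlisten helper.

    # first target claims the default RPC socket /var/tmp/spdk.sock
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    first_pid=$!
    sleep 1

    # a second target on another core mask reuses the same socket and should fail to start
    if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
        echo "unexpected: second instance started" >&2
    fi

    # clean up the first instance
    kill -SIGINT "$first_pid"
    wait "$first_pid"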
00:06:36.654 [2024-10-01 05:59:02.012420] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:36.654 [2024-10-01 05:59:02.012430] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 69359 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 69359 ']' 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 69359 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69359 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:36.654 killing process with pid 69359 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69359' 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 69359 00:06:36.654 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 69359 00:06:36.913 00:06:36.913 real 0m0.980s 00:06:36.913 user 0m1.146s 00:06:36.913 sys 0m0.294s 00:06:36.913 ************************************ 00:06:36.913 END TEST exit_on_failed_rpc_init 00:06:36.913 ************************************ 00:06:36.913 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.913 05:59:02 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:36.913 05:59:02 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:36.913 ************************************ 00:06:36.913 END TEST skip_rpc 00:06:36.913 ************************************ 00:06:36.913 00:06:36.913 real 0m12.929s 00:06:36.913 user 0m12.275s 00:06:36.913 sys 0m1.198s 00:06:36.913 05:59:02 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:36.913 05:59:02 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:36.913 05:59:02 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:36.913 05:59:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:36.913 05:59:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:36.913 05:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:36.913 
************************************ 00:06:36.913 START TEST rpc_client 00:06:36.913 ************************************ 00:06:36.913 05:59:02 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:37.173 * Looking for test storage... 00:06:37.173 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.173 05:59:02 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.173 --rc genhtml_branch_coverage=1 00:06:37.173 --rc genhtml_function_coverage=1 00:06:37.173 --rc genhtml_legend=1 00:06:37.173 --rc geninfo_all_blocks=1 00:06:37.173 --rc geninfo_unexecuted_blocks=1 00:06:37.173 00:06:37.173 ' 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.173 --rc genhtml_branch_coverage=1 00:06:37.173 --rc genhtml_function_coverage=1 00:06:37.173 --rc genhtml_legend=1 00:06:37.173 --rc geninfo_all_blocks=1 00:06:37.173 --rc geninfo_unexecuted_blocks=1 00:06:37.173 00:06:37.173 ' 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.173 --rc genhtml_branch_coverage=1 00:06:37.173 --rc genhtml_function_coverage=1 00:06:37.173 --rc genhtml_legend=1 00:06:37.173 --rc geninfo_all_blocks=1 00:06:37.173 --rc geninfo_unexecuted_blocks=1 00:06:37.173 00:06:37.173 ' 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.173 --rc genhtml_branch_coverage=1 00:06:37.173 --rc genhtml_function_coverage=1 00:06:37.173 --rc genhtml_legend=1 00:06:37.173 --rc geninfo_all_blocks=1 00:06:37.173 --rc geninfo_unexecuted_blocks=1 00:06:37.173 00:06:37.173 ' 00:06:37.173 05:59:02 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:37.173 OK 00:06:37.173 05:59:02 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:37.173 00:06:37.173 real 0m0.227s 00:06:37.173 user 0m0.140s 00:06:37.173 sys 0m0.098s 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:37.173 05:59:02 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:37.173 ************************************ 00:06:37.173 END TEST rpc_client 00:06:37.173 ************************************ 00:06:37.173 05:59:02 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:37.173 05:59:02 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:37.173 05:59:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:37.173 05:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:37.173 ************************************ 00:06:37.173 START TEST json_config 00:06:37.173 ************************************ 00:06:37.173 05:59:02 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:37.173 05:59:02 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.434 05:59:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.434 05:59:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.434 05:59:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.434 05:59:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.434 05:59:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.434 05:59:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:37.434 05:59:02 json_config -- scripts/common.sh@345 -- # : 1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.434 05:59:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:37.434 05:59:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@353 -- # local d=1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.434 05:59:02 json_config -- scripts/common.sh@355 -- # echo 1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.434 05:59:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@353 -- # local d=2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.434 05:59:02 json_config -- scripts/common.sh@355 -- # echo 2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.434 05:59:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.434 05:59:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.434 05:59:02 json_config -- scripts/common.sh@368 -- # return 0 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:37.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.434 --rc genhtml_branch_coverage=1 00:06:37.434 --rc genhtml_function_coverage=1 00:06:37.434 --rc genhtml_legend=1 00:06:37.434 --rc geninfo_all_blocks=1 00:06:37.434 --rc geninfo_unexecuted_blocks=1 00:06:37.434 00:06:37.434 ' 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:37.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.434 --rc genhtml_branch_coverage=1 00:06:37.434 --rc genhtml_function_coverage=1 00:06:37.434 --rc genhtml_legend=1 00:06:37.434 --rc geninfo_all_blocks=1 00:06:37.434 --rc geninfo_unexecuted_blocks=1 00:06:37.434 00:06:37.434 ' 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:37.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.434 --rc genhtml_branch_coverage=1 00:06:37.434 --rc genhtml_function_coverage=1 00:06:37.434 --rc genhtml_legend=1 00:06:37.434 --rc geninfo_all_blocks=1 00:06:37.434 --rc geninfo_unexecuted_blocks=1 00:06:37.434 00:06:37.434 ' 00:06:37.434 05:59:02 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:37.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.434 --rc genhtml_branch_coverage=1 00:06:37.434 --rc genhtml_function_coverage=1 00:06:37.434 --rc genhtml_legend=1 00:06:37.434 --rc geninfo_all_blocks=1 00:06:37.434 --rc geninfo_unexecuted_blocks=1 00:06:37.434 00:06:37.434 ' 00:06:37.434 05:59:02 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.434 05:59:02 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:37.434 05:59:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.434 05:59:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.434 05:59:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.434 05:59:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.434 05:59:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.434 05:59:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.434 05:59:02 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.434 05:59:02 json_config -- paths/export.sh@5 -- # export PATH 00:06:37.434 05:59:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@51 -- # : 0 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:37.434 05:59:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:37.434 05:59:02 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:37.435 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:37.435 05:59:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:37.435 INFO: JSON configuration test init 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.435 05:59:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:37.435 05:59:02 json_config -- json_config/common.sh@9 -- # local app=target 00:06:37.435 05:59:02 json_config -- json_config/common.sh@10 -- # shift 
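Note on the json_config setup above: the target is about to be started with --wait-for-rpc on its own socket (/var/tmp/spdk_tgt.sock, per the app_socket and app_params arrays just declared), so that all configuration arrives over RPC before subsystem init completes. A minimal sketch of the same start-and-configure handshake, assuming the paths from this log and a config file produced by an earlier save_config run; the /tmp/live_config.json name is illustrative only.

    # start the target with configuration deferred until RPCs arrive
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock --wait-for-rpc &

    # drive it over its private socket instead of the default /var/tmp/spdk.sock
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$rpc" -s /var/tmp/spdk_tgt.sock load_config < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # dump the resulting live configuration for inspection
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config > /tmp/live_config.json

The test itself feeds load_config from gen_nvme.sh output rather than from a saved file, as the following lines show; the stdin-driven mechanism is the same.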
00:06:37.435 05:59:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:37.435 05:59:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:37.435 05:59:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:37.435 05:59:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.435 05:59:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:37.435 05:59:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69503 00:06:37.435 Waiting for target to run... 00:06:37.435 05:59:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:37.435 05:59:02 json_config -- json_config/common.sh@25 -- # waitforlisten 69503 /var/tmp/spdk_tgt.sock 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@831 -- # '[' -z 69503 ']' 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.435 05:59:02 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:37.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.435 05:59:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:37.435 [2024-10-01 05:59:02.993552] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:37.435 [2024-10-01 05:59:02.993678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69503 ] 00:06:37.694 [2024-10-01 05:59:03.307627] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.952 [2024-10-01 05:59:03.329153] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:38.520 00:06:38.520 05:59:03 json_config -- json_config/common.sh@26 -- # echo '' 00:06:38.520 05:59:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:38.520 05:59:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.520 05:59:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:38.520 05:59:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:38.520 05:59:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:38.520 05:59:04 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:38.520 05:59:04 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:38.520 05:59:04 json_config 
-- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:38.780 [2024-10-01 05:59:04.327144] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:39.038 05:59:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.038 05:59:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:39.038 05:59:04 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:39.038 05:59:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@54 -- # sort 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:39.297 05:59:04 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:39.297 05:59:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:39.297 05:59:04 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:39.297 05:59:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:39.297 05:59:04 json_config -- 
json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:39.297 05:59:04 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.297 05:59:04 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:39.556 MallocForNvmf0 00:06:39.556 05:59:05 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:39.556 05:59:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:40.124 MallocForNvmf1 00:06:40.124 05:59:05 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.124 05:59:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:40.124 [2024-10-01 05:59:05.677327] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:40.124 05:59:05 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.124 05:59:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:40.383 05:59:05 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.383 05:59:05 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:40.642 05:59:06 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.642 05:59:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:40.901 05:59:06 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:40.901 05:59:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:41.159 [2024-10-01 05:59:06.633846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:41.159 05:59:06 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:41.159 05:59:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.159 05:59:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.159 05:59:06 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:41.159 05:59:06 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.159 05:59:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.159 05:59:06 json_config -- json_config/json_config.sh@302 -- # [[ 
0 -eq 1 ]] 00:06:41.159 05:59:06 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.159 05:59:06 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:41.417 MallocBdevForConfigChangeCheck 00:06:41.417 05:59:07 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:41.417 05:59:07 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.417 05:59:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.675 05:59:07 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:41.675 05:59:07 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:41.934 INFO: shutting down applications... 00:06:41.934 05:59:07 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:41.934 05:59:07 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:41.934 05:59:07 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:41.934 05:59:07 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:41.934 05:59:07 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:42.193 Calling clear_iscsi_subsystem 00:06:42.193 Calling clear_nvmf_subsystem 00:06:42.193 Calling clear_nbd_subsystem 00:06:42.193 Calling clear_ublk_subsystem 00:06:42.193 Calling clear_vhost_blk_subsystem 00:06:42.193 Calling clear_vhost_scsi_subsystem 00:06:42.193 Calling clear_bdev_subsystem 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:06:42.193 05:59:07 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:42.760 05:59:08 json_config -- json_config/json_config.sh@352 -- # break 00:06:42.760 05:59:08 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:42.760 05:59:08 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:42.760 05:59:08 json_config -- json_config/common.sh@31 -- # local app=target 00:06:42.760 05:59:08 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:42.760 05:59:08 json_config -- json_config/common.sh@35 -- # [[ -n 69503 ]] 00:06:42.760 05:59:08 json_config -- json_config/common.sh@38 -- # kill -SIGINT 69503 00:06:42.760 05:59:08 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:42.760 05:59:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:42.760 05:59:08 json_config -- json_config/common.sh@41 -- # kill -0 69503 00:06:42.760 05:59:08 json_config -- json_config/common.sh@45 -- # 
sleep 0.5 00:06:43.327 05:59:08 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:43.327 05:59:08 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:43.327 05:59:08 json_config -- json_config/common.sh@41 -- # kill -0 69503 00:06:43.327 05:59:08 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:43.327 05:59:08 json_config -- json_config/common.sh@43 -- # break 00:06:43.327 05:59:08 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:43.327 SPDK target shutdown done 00:06:43.327 05:59:08 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:43.327 INFO: relaunching applications... 00:06:43.327 05:59:08 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:43.327 05:59:08 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.327 05:59:08 json_config -- json_config/common.sh@9 -- # local app=target 00:06:43.327 05:59:08 json_config -- json_config/common.sh@10 -- # shift 00:06:43.327 05:59:08 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:43.327 05:59:08 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:43.327 05:59:08 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:43.327 05:59:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.327 05:59:08 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:43.327 05:59:08 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=69699 00:06:43.327 05:59:08 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:43.327 Waiting for target to run... 00:06:43.327 05:59:08 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:43.327 05:59:08 json_config -- json_config/common.sh@25 -- # waitforlisten 69699 /var/tmp/spdk_tgt.sock 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@831 -- # '[' -z 69699 ']' 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:43.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:43.327 05:59:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:43.327 [2024-10-01 05:59:08.751041] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
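Note on the relaunch above: once the first run's configuration has been written to spdk_tgt_config.json, the target is restarted non-interactively with --json so the whole subsystem tree (including the NVMe/TCP listener on 127.0.0.1:4420) comes back without further RPC calls. The equivalent two steps, using only flags and paths that appear in this log, are roughly:

    # persist the live configuration of the running target
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        > /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json

    # relaunch directly from that file (no --wait-for-rpc needed this time)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json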
00:06:43.327 [2024-10-01 05:59:08.751156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69699 ] 00:06:43.585 [2024-10-01 05:59:09.047436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.585 [2024-10-01 05:59:09.069462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.585 [2024-10-01 05:59:09.198537] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:43.844 [2024-10-01 05:59:09.390703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.844 [2024-10-01 05:59:09.422809] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:44.102 05:59:09 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:44.102 05:59:09 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:44.102 00:06:44.102 05:59:09 json_config -- json_config/common.sh@26 -- # echo '' 00:06:44.102 05:59:09 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:44.102 INFO: Checking if target configuration is the same... 00:06:44.102 05:59:09 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:44.102 05:59:09 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.102 05:59:09 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:44.102 05:59:09 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.102 + '[' 2 -ne 2 ']' 00:06:44.102 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:44.362 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:44.362 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:44.362 +++ basename /dev/fd/62 00:06:44.362 ++ mktemp /tmp/62.XXX 00:06:44.362 + tmp_file_1=/tmp/62.XEj 00:06:44.362 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.362 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.362 + tmp_file_2=/tmp/spdk_tgt_config.json.9oo 00:06:44.362 + ret=0 00:06:44.362 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:44.621 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:44.621 + diff -u /tmp/62.XEj /tmp/spdk_tgt_config.json.9oo 00:06:44.621 INFO: JSON config files are the same 00:06:44.621 + echo 'INFO: JSON config files are the same' 00:06:44.621 + rm /tmp/62.XEj /tmp/spdk_tgt_config.json.9oo 00:06:44.622 INFO: changing configuration and checking if this can be detected... 00:06:44.622 + exit 0 00:06:44.622 05:59:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:44.622 05:59:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
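Note on the "JSON config files are the same" verdict above: json_diff.sh does not compare the raw files; it first normalizes both documents with config_filter.py -method sort, so key order and whitespace cannot produce false mismatches, and only then runs a plain diff. A condensed sketch of that comparison follows; the temporary file names are illustrative, the test itself uses mktemp.

    filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # sort the live config and the reference file into a canonical form
    "$rpc" -s /var/tmp/spdk_tgt.sock save_config | "$filter" -method sort > /tmp/live_sorted.json
    "$filter" -method sort < /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json > /tmp/ref_sorted.json

    # identical canonical forms mean the running target matches the saved configuration
    diff -u /tmp/ref_sorted.json /tmp/live_sorted.json \
        && echo 'INFO: JSON config files are the same'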
00:06:44.622 05:59:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.622 05:59:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:44.881 05:59:10 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.881 05:59:10 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:44.881 05:59:10 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:44.881 + '[' 2 -ne 2 ']' 00:06:44.881 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:06:44.881 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:06:44.881 + rootdir=/home/vagrant/spdk_repo/spdk 00:06:44.881 +++ basename /dev/fd/62 00:06:44.881 ++ mktemp /tmp/62.XXX 00:06:44.881 + tmp_file_1=/tmp/62.n4D 00:06:44.881 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:44.881 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:44.881 + tmp_file_2=/tmp/spdk_tgt_config.json.zco 00:06:44.881 + ret=0 00:06:44.881 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:45.478 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:06:45.478 + diff -u /tmp/62.n4D /tmp/spdk_tgt_config.json.zco 00:06:45.478 + ret=1 00:06:45.478 + echo '=== Start of file: /tmp/62.n4D ===' 00:06:45.478 + cat /tmp/62.n4D 00:06:45.478 + echo '=== End of file: /tmp/62.n4D ===' 00:06:45.478 + echo '' 00:06:45.478 + echo '=== Start of file: /tmp/spdk_tgt_config.json.zco ===' 00:06:45.478 + cat /tmp/spdk_tgt_config.json.zco 00:06:45.478 + echo '=== End of file: /tmp/spdk_tgt_config.json.zco ===' 00:06:45.478 + echo '' 00:06:45.478 + rm /tmp/62.n4D /tmp/spdk_tgt_config.json.zco 00:06:45.478 + exit 1 00:06:45.478 INFO: configuration change detected. 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 
00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@324 -- # [[ -n 69699 ]] 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.478 05:59:10 json_config -- json_config/json_config.sh@330 -- # killprocess 69699 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@950 -- # '[' -z 69699 ']' 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@954 -- # kill -0 69699 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@955 -- # uname 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69699 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.478 killing process with pid 69699 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69699' 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@969 -- # kill 69699 00:06:45.478 05:59:10 json_config -- common/autotest_common.sh@974 -- # wait 69699 00:06:45.754 05:59:11 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:06:45.754 05:59:11 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:45.754 05:59:11 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:45.754 05:59:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.754 05:59:11 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:45.754 INFO: Success 00:06:45.754 05:59:11 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:45.754 00:06:45.754 real 0m8.465s 00:06:45.754 user 0m12.233s 00:06:45.754 sys 0m1.495s 00:06:45.754 
05:59:11 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.754 05:59:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.754 ************************************ 00:06:45.754 END TEST json_config 00:06:45.754 ************************************ 00:06:45.754 05:59:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:45.754 05:59:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.754 05:59:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.754 05:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:45.754 ************************************ 00:06:45.754 START TEST json_config_extra_key 00:06:45.754 ************************************ 00:06:45.754 05:59:11 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:45.754 05:59:11 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:45.754 05:59:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:06:45.754 05:59:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:46.030 05:59:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.030 --rc genhtml_branch_coverage=1 00:06:46.030 --rc genhtml_function_coverage=1 00:06:46.030 --rc genhtml_legend=1 00:06:46.030 --rc geninfo_all_blocks=1 00:06:46.030 --rc geninfo_unexecuted_blocks=1 00:06:46.030 00:06:46.030 ' 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.030 --rc genhtml_branch_coverage=1 00:06:46.030 --rc genhtml_function_coverage=1 00:06:46.030 --rc genhtml_legend=1 00:06:46.030 --rc geninfo_all_blocks=1 00:06:46.030 --rc geninfo_unexecuted_blocks=1 00:06:46.030 00:06:46.030 ' 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.030 --rc genhtml_branch_coverage=1 00:06:46.030 --rc genhtml_function_coverage=1 00:06:46.030 --rc genhtml_legend=1 00:06:46.030 --rc geninfo_all_blocks=1 00:06:46.030 --rc geninfo_unexecuted_blocks=1 00:06:46.030 00:06:46.030 ' 00:06:46.030 05:59:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:46.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:46.030 --rc genhtml_branch_coverage=1 00:06:46.030 --rc genhtml_function_coverage=1 00:06:46.030 --rc genhtml_legend=1 00:06:46.030 --rc geninfo_all_blocks=1 00:06:46.030 --rc geninfo_unexecuted_blocks=1 00:06:46.030 00:06:46.030 ' 00:06:46.030 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:46.030 05:59:11 
json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:46.030 05:59:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:46.031 05:59:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:46.031 05:59:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:46.031 05:59:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:46.031 05:59:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:46.031 05:59:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.031 05:59:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.031 05:59:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.031 05:59:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:46.031 05:59:11 json_config_extra_key -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:46.031 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:46.031 05:59:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:46.031 INFO: launching applications... 00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:06:46.031 05:59:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=69853 00:06:46.031 Waiting for target to run... 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 69853 /var/tmp/spdk_tgt.sock 00:06:46.031 05:59:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 69853 ']' 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:46.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:46.031 05:59:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:46.031 [2024-10-01 05:59:11.494742] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:46.031 [2024-10-01 05:59:11.494860] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69853 ] 00:06:46.290 [2024-10-01 05:59:11.790624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.290 [2024-10-01 05:59:11.811650] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.290 [2024-10-01 05:59:11.835455] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:46.860 05:59:12 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:46.860 05:59:12 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:46.860 00:06:46.860 05:59:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:46.860 INFO: shutting down applications... 00:06:46.860 05:59:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:46.861 05:59:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 69853 ]] 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 69853 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69853 00:06:46.861 05:59:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 69853 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:47.431 SPDK target shutdown done 00:06:47.431 05:59:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:47.431 Success 00:06:47.431 05:59:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:47.431 00:06:47.431 real 0m1.703s 00:06:47.431 user 0m1.562s 00:06:47.431 sys 0m0.302s 00:06:47.431 05:59:12 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.431 ************************************ 00:06:47.431 END TEST json_config_extra_key 00:06:47.431 05:59:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:47.431 ************************************ 00:06:47.431 05:59:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.431 05:59:12 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:47.431 05:59:12 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.431 05:59:12 -- common/autotest_common.sh@10 -- # set +x 00:06:47.431 ************************************ 00:06:47.431 START TEST alias_rpc 00:06:47.431 ************************************ 00:06:47.431 05:59:13 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:47.691 * Looking for test storage... 
00:06:47.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:47.691 05:59:13 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:47.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.691 --rc genhtml_branch_coverage=1 00:06:47.691 --rc genhtml_function_coverage=1 00:06:47.691 --rc genhtml_legend=1 00:06:47.691 --rc geninfo_all_blocks=1 00:06:47.691 --rc geninfo_unexecuted_blocks=1 00:06:47.691 00:06:47.691 ' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:47.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.691 --rc genhtml_branch_coverage=1 00:06:47.691 --rc genhtml_function_coverage=1 00:06:47.691 --rc genhtml_legend=1 00:06:47.691 --rc geninfo_all_blocks=1 00:06:47.691 --rc geninfo_unexecuted_blocks=1 00:06:47.691 00:06:47.691 ' 00:06:47.691 05:59:13 alias_rpc -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:47.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.691 --rc genhtml_branch_coverage=1 00:06:47.691 --rc genhtml_function_coverage=1 00:06:47.691 --rc genhtml_legend=1 00:06:47.691 --rc geninfo_all_blocks=1 00:06:47.691 --rc geninfo_unexecuted_blocks=1 00:06:47.691 00:06:47.691 ' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:47.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.691 --rc genhtml_branch_coverage=1 00:06:47.691 --rc genhtml_function_coverage=1 00:06:47.691 --rc genhtml_legend=1 00:06:47.691 --rc geninfo_all_blocks=1 00:06:47.691 --rc geninfo_unexecuted_blocks=1 00:06:47.691 00:06:47.691 ' 00:06:47.691 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:47.691 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=69925 00:06:47.691 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:47.691 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 69925 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 69925 ']' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:47.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:47.691 05:59:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:47.691 [2024-10-01 05:59:13.263436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:47.691 [2024-10-01 05:59:13.263568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69925 ] 00:06:47.951 [2024-10-01 05:59:13.396288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.951 [2024-10-01 05:59:13.430597] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.951 [2024-10-01 05:59:13.465796] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:48.211 05:59:13 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.211 05:59:13 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.211 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:48.470 05:59:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 69925 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 69925 ']' 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 69925 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69925 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:48.470 killing process with pid 69925 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69925' 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@969 -- # kill 69925 00:06:48.470 05:59:13 alias_rpc -- common/autotest_common.sh@974 -- # wait 69925 00:06:48.730 00:06:48.730 real 0m1.208s 00:06:48.730 user 0m1.448s 00:06:48.730 sys 0m0.320s 00:06:48.730 05:59:14 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.730 05:59:14 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.730 ************************************ 00:06:48.730 END TEST alias_rpc 00:06:48.730 ************************************ 00:06:48.730 05:59:14 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:48.730 05:59:14 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:48.730 05:59:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.730 05:59:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.730 05:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:48.730 ************************************ 00:06:48.730 START TEST spdkcli_tcp 00:06:48.730 ************************************ 00:06:48.730 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:48.730 * Looking for test storage... 
00:06:48.989 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:48.989 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:48.989 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:06:48.989 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:48.989 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:48.989 05:59:14 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:48.990 05:59:14 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:48.990 05:59:14 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:48.990 05:59:14 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.990 --rc genhtml_branch_coverage=1 00:06:48.990 --rc genhtml_function_coverage=1 00:06:48.990 --rc genhtml_legend=1 00:06:48.990 --rc geninfo_all_blocks=1 00:06:48.990 --rc geninfo_unexecuted_blocks=1 00:06:48.990 00:06:48.990 ' 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.990 --rc genhtml_branch_coverage=1 00:06:48.990 --rc genhtml_function_coverage=1 00:06:48.990 --rc genhtml_legend=1 00:06:48.990 --rc geninfo_all_blocks=1 00:06:48.990 --rc geninfo_unexecuted_blocks=1 00:06:48.990 
00:06:48.990 ' 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.990 --rc genhtml_branch_coverage=1 00:06:48.990 --rc genhtml_function_coverage=1 00:06:48.990 --rc genhtml_legend=1 00:06:48.990 --rc geninfo_all_blocks=1 00:06:48.990 --rc geninfo_unexecuted_blocks=1 00:06:48.990 00:06:48.990 ' 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:48.990 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:48.990 --rc genhtml_branch_coverage=1 00:06:48.990 --rc genhtml_function_coverage=1 00:06:48.990 --rc genhtml_legend=1 00:06:48.990 --rc geninfo_all_blocks=1 00:06:48.990 --rc geninfo_unexecuted_blocks=1 00:06:48.990 00:06:48.990 ' 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=70002 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:48.990 05:59:14 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 70002 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 70002 ']' 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.990 05:59:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:48.990 [2024-10-01 05:59:14.518616] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:48.990 [2024-10-01 05:59:14.519546] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70002 ] 00:06:49.250 [2024-10-01 05:59:14.651161] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.250 [2024-10-01 05:59:14.685677] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.250 [2024-10-01 05:59:14.685689] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.250 [2024-10-01 05:59:14.722021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:50.188 05:59:15 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.188 05:59:15 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:50.188 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:50.188 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=70019 00:06:50.188 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:50.188 [ 00:06:50.188 "bdev_malloc_delete", 00:06:50.188 "bdev_malloc_create", 00:06:50.188 "bdev_null_resize", 00:06:50.188 "bdev_null_delete", 00:06:50.188 "bdev_null_create", 00:06:50.188 "bdev_nvme_cuse_unregister", 00:06:50.188 "bdev_nvme_cuse_register", 00:06:50.188 "bdev_opal_new_user", 00:06:50.188 "bdev_opal_set_lock_state", 00:06:50.188 "bdev_opal_delete", 00:06:50.188 "bdev_opal_get_info", 00:06:50.188 "bdev_opal_create", 00:06:50.188 "bdev_nvme_opal_revert", 00:06:50.188 "bdev_nvme_opal_init", 00:06:50.188 "bdev_nvme_send_cmd", 00:06:50.188 "bdev_nvme_set_keys", 00:06:50.188 "bdev_nvme_get_path_iostat", 00:06:50.188 "bdev_nvme_get_mdns_discovery_info", 00:06:50.188 "bdev_nvme_stop_mdns_discovery", 00:06:50.188 "bdev_nvme_start_mdns_discovery", 00:06:50.188 "bdev_nvme_set_multipath_policy", 00:06:50.188 "bdev_nvme_set_preferred_path", 00:06:50.188 "bdev_nvme_get_io_paths", 00:06:50.188 "bdev_nvme_remove_error_injection", 00:06:50.188 "bdev_nvme_add_error_injection", 00:06:50.188 "bdev_nvme_get_discovery_info", 00:06:50.188 "bdev_nvme_stop_discovery", 00:06:50.188 "bdev_nvme_start_discovery", 00:06:50.188 "bdev_nvme_get_controller_health_info", 00:06:50.188 "bdev_nvme_disable_controller", 00:06:50.188 "bdev_nvme_enable_controller", 00:06:50.188 "bdev_nvme_reset_controller", 00:06:50.188 "bdev_nvme_get_transport_statistics", 00:06:50.188 "bdev_nvme_apply_firmware", 00:06:50.188 "bdev_nvme_detach_controller", 00:06:50.188 "bdev_nvme_get_controllers", 00:06:50.188 "bdev_nvme_attach_controller", 00:06:50.188 "bdev_nvme_set_hotplug", 00:06:50.188 "bdev_nvme_set_options", 00:06:50.188 "bdev_passthru_delete", 00:06:50.188 "bdev_passthru_create", 00:06:50.188 "bdev_lvol_set_parent_bdev", 00:06:50.188 "bdev_lvol_set_parent", 00:06:50.188 "bdev_lvol_check_shallow_copy", 00:06:50.188 "bdev_lvol_start_shallow_copy", 00:06:50.188 "bdev_lvol_grow_lvstore", 00:06:50.188 "bdev_lvol_get_lvols", 00:06:50.188 "bdev_lvol_get_lvstores", 00:06:50.188 "bdev_lvol_delete", 00:06:50.188 "bdev_lvol_set_read_only", 00:06:50.188 "bdev_lvol_resize", 00:06:50.188 "bdev_lvol_decouple_parent", 00:06:50.188 "bdev_lvol_inflate", 00:06:50.188 "bdev_lvol_rename", 00:06:50.188 "bdev_lvol_clone_bdev", 00:06:50.188 "bdev_lvol_clone", 00:06:50.188 "bdev_lvol_snapshot", 
00:06:50.188 "bdev_lvol_create", 00:06:50.188 "bdev_lvol_delete_lvstore", 00:06:50.188 "bdev_lvol_rename_lvstore", 00:06:50.188 "bdev_lvol_create_lvstore", 00:06:50.188 "bdev_raid_set_options", 00:06:50.188 "bdev_raid_remove_base_bdev", 00:06:50.188 "bdev_raid_add_base_bdev", 00:06:50.188 "bdev_raid_delete", 00:06:50.188 "bdev_raid_create", 00:06:50.188 "bdev_raid_get_bdevs", 00:06:50.188 "bdev_error_inject_error", 00:06:50.188 "bdev_error_delete", 00:06:50.188 "bdev_error_create", 00:06:50.188 "bdev_split_delete", 00:06:50.188 "bdev_split_create", 00:06:50.188 "bdev_delay_delete", 00:06:50.188 "bdev_delay_create", 00:06:50.188 "bdev_delay_update_latency", 00:06:50.188 "bdev_zone_block_delete", 00:06:50.188 "bdev_zone_block_create", 00:06:50.188 "blobfs_create", 00:06:50.188 "blobfs_detect", 00:06:50.189 "blobfs_set_cache_size", 00:06:50.189 "bdev_aio_delete", 00:06:50.189 "bdev_aio_rescan", 00:06:50.189 "bdev_aio_create", 00:06:50.189 "bdev_ftl_set_property", 00:06:50.189 "bdev_ftl_get_properties", 00:06:50.189 "bdev_ftl_get_stats", 00:06:50.189 "bdev_ftl_unmap", 00:06:50.189 "bdev_ftl_unload", 00:06:50.189 "bdev_ftl_delete", 00:06:50.189 "bdev_ftl_load", 00:06:50.189 "bdev_ftl_create", 00:06:50.189 "bdev_virtio_attach_controller", 00:06:50.189 "bdev_virtio_scsi_get_devices", 00:06:50.189 "bdev_virtio_detach_controller", 00:06:50.189 "bdev_virtio_blk_set_hotplug", 00:06:50.189 "bdev_iscsi_delete", 00:06:50.189 "bdev_iscsi_create", 00:06:50.189 "bdev_iscsi_set_options", 00:06:50.189 "bdev_uring_delete", 00:06:50.189 "bdev_uring_rescan", 00:06:50.189 "bdev_uring_create", 00:06:50.189 "accel_error_inject_error", 00:06:50.189 "ioat_scan_accel_module", 00:06:50.189 "dsa_scan_accel_module", 00:06:50.189 "iaa_scan_accel_module", 00:06:50.189 "keyring_file_remove_key", 00:06:50.189 "keyring_file_add_key", 00:06:50.189 "keyring_linux_set_options", 00:06:50.189 "fsdev_aio_delete", 00:06:50.189 "fsdev_aio_create", 00:06:50.189 "iscsi_get_histogram", 00:06:50.189 "iscsi_enable_histogram", 00:06:50.189 "iscsi_set_options", 00:06:50.189 "iscsi_get_auth_groups", 00:06:50.189 "iscsi_auth_group_remove_secret", 00:06:50.189 "iscsi_auth_group_add_secret", 00:06:50.189 "iscsi_delete_auth_group", 00:06:50.189 "iscsi_create_auth_group", 00:06:50.189 "iscsi_set_discovery_auth", 00:06:50.189 "iscsi_get_options", 00:06:50.189 "iscsi_target_node_request_logout", 00:06:50.189 "iscsi_target_node_set_redirect", 00:06:50.189 "iscsi_target_node_set_auth", 00:06:50.189 "iscsi_target_node_add_lun", 00:06:50.189 "iscsi_get_stats", 00:06:50.189 "iscsi_get_connections", 00:06:50.189 "iscsi_portal_group_set_auth", 00:06:50.189 "iscsi_start_portal_group", 00:06:50.189 "iscsi_delete_portal_group", 00:06:50.189 "iscsi_create_portal_group", 00:06:50.189 "iscsi_get_portal_groups", 00:06:50.189 "iscsi_delete_target_node", 00:06:50.189 "iscsi_target_node_remove_pg_ig_maps", 00:06:50.189 "iscsi_target_node_add_pg_ig_maps", 00:06:50.189 "iscsi_create_target_node", 00:06:50.189 "iscsi_get_target_nodes", 00:06:50.189 "iscsi_delete_initiator_group", 00:06:50.189 "iscsi_initiator_group_remove_initiators", 00:06:50.189 "iscsi_initiator_group_add_initiators", 00:06:50.189 "iscsi_create_initiator_group", 00:06:50.189 "iscsi_get_initiator_groups", 00:06:50.189 "nvmf_set_crdt", 00:06:50.189 "nvmf_set_config", 00:06:50.189 "nvmf_set_max_subsystems", 00:06:50.189 "nvmf_stop_mdns_prr", 00:06:50.189 "nvmf_publish_mdns_prr", 00:06:50.189 "nvmf_subsystem_get_listeners", 00:06:50.189 "nvmf_subsystem_get_qpairs", 00:06:50.189 
"nvmf_subsystem_get_controllers", 00:06:50.189 "nvmf_get_stats", 00:06:50.189 "nvmf_get_transports", 00:06:50.189 "nvmf_create_transport", 00:06:50.189 "nvmf_get_targets", 00:06:50.189 "nvmf_delete_target", 00:06:50.189 "nvmf_create_target", 00:06:50.189 "nvmf_subsystem_allow_any_host", 00:06:50.189 "nvmf_subsystem_set_keys", 00:06:50.189 "nvmf_subsystem_remove_host", 00:06:50.189 "nvmf_subsystem_add_host", 00:06:50.189 "nvmf_ns_remove_host", 00:06:50.189 "nvmf_ns_add_host", 00:06:50.189 "nvmf_subsystem_remove_ns", 00:06:50.189 "nvmf_subsystem_set_ns_ana_group", 00:06:50.189 "nvmf_subsystem_add_ns", 00:06:50.189 "nvmf_subsystem_listener_set_ana_state", 00:06:50.189 "nvmf_discovery_get_referrals", 00:06:50.189 "nvmf_discovery_remove_referral", 00:06:50.189 "nvmf_discovery_add_referral", 00:06:50.189 "nvmf_subsystem_remove_listener", 00:06:50.189 "nvmf_subsystem_add_listener", 00:06:50.189 "nvmf_delete_subsystem", 00:06:50.189 "nvmf_create_subsystem", 00:06:50.189 "nvmf_get_subsystems", 00:06:50.189 "env_dpdk_get_mem_stats", 00:06:50.189 "nbd_get_disks", 00:06:50.189 "nbd_stop_disk", 00:06:50.189 "nbd_start_disk", 00:06:50.189 "ublk_recover_disk", 00:06:50.189 "ublk_get_disks", 00:06:50.189 "ublk_stop_disk", 00:06:50.189 "ublk_start_disk", 00:06:50.189 "ublk_destroy_target", 00:06:50.189 "ublk_create_target", 00:06:50.189 "virtio_blk_create_transport", 00:06:50.189 "virtio_blk_get_transports", 00:06:50.189 "vhost_controller_set_coalescing", 00:06:50.189 "vhost_get_controllers", 00:06:50.189 "vhost_delete_controller", 00:06:50.189 "vhost_create_blk_controller", 00:06:50.189 "vhost_scsi_controller_remove_target", 00:06:50.189 "vhost_scsi_controller_add_target", 00:06:50.189 "vhost_start_scsi_controller", 00:06:50.189 "vhost_create_scsi_controller", 00:06:50.189 "thread_set_cpumask", 00:06:50.189 "scheduler_set_options", 00:06:50.189 "framework_get_governor", 00:06:50.189 "framework_get_scheduler", 00:06:50.189 "framework_set_scheduler", 00:06:50.189 "framework_get_reactors", 00:06:50.189 "thread_get_io_channels", 00:06:50.189 "thread_get_pollers", 00:06:50.189 "thread_get_stats", 00:06:50.189 "framework_monitor_context_switch", 00:06:50.189 "spdk_kill_instance", 00:06:50.189 "log_enable_timestamps", 00:06:50.189 "log_get_flags", 00:06:50.189 "log_clear_flag", 00:06:50.189 "log_set_flag", 00:06:50.189 "log_get_level", 00:06:50.189 "log_set_level", 00:06:50.189 "log_get_print_level", 00:06:50.189 "log_set_print_level", 00:06:50.189 "framework_enable_cpumask_locks", 00:06:50.189 "framework_disable_cpumask_locks", 00:06:50.189 "framework_wait_init", 00:06:50.189 "framework_start_init", 00:06:50.189 "scsi_get_devices", 00:06:50.189 "bdev_get_histogram", 00:06:50.189 "bdev_enable_histogram", 00:06:50.189 "bdev_set_qos_limit", 00:06:50.189 "bdev_set_qd_sampling_period", 00:06:50.189 "bdev_get_bdevs", 00:06:50.189 "bdev_reset_iostat", 00:06:50.189 "bdev_get_iostat", 00:06:50.189 "bdev_examine", 00:06:50.189 "bdev_wait_for_examine", 00:06:50.189 "bdev_set_options", 00:06:50.189 "accel_get_stats", 00:06:50.189 "accel_set_options", 00:06:50.189 "accel_set_driver", 00:06:50.189 "accel_crypto_key_destroy", 00:06:50.189 "accel_crypto_keys_get", 00:06:50.189 "accel_crypto_key_create", 00:06:50.189 "accel_assign_opc", 00:06:50.189 "accel_get_module_info", 00:06:50.189 "accel_get_opc_assignments", 00:06:50.189 "vmd_rescan", 00:06:50.189 "vmd_remove_device", 00:06:50.189 "vmd_enable", 00:06:50.189 "sock_get_default_impl", 00:06:50.189 "sock_set_default_impl", 00:06:50.189 "sock_impl_set_options", 00:06:50.189 
"sock_impl_get_options", 00:06:50.189 "iobuf_get_stats", 00:06:50.189 "iobuf_set_options", 00:06:50.189 "keyring_get_keys", 00:06:50.189 "framework_get_pci_devices", 00:06:50.189 "framework_get_config", 00:06:50.189 "framework_get_subsystems", 00:06:50.189 "fsdev_set_opts", 00:06:50.189 "fsdev_get_opts", 00:06:50.189 "trace_get_info", 00:06:50.189 "trace_get_tpoint_group_mask", 00:06:50.189 "trace_disable_tpoint_group", 00:06:50.189 "trace_enable_tpoint_group", 00:06:50.189 "trace_clear_tpoint_mask", 00:06:50.189 "trace_set_tpoint_mask", 00:06:50.189 "notify_get_notifications", 00:06:50.189 "notify_get_types", 00:06:50.189 "spdk_get_version", 00:06:50.189 "rpc_get_methods" 00:06:50.189 ] 00:06:50.189 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:50.189 05:59:15 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.189 05:59:15 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.189 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:50.189 05:59:15 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 70002 00:06:50.189 05:59:15 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 70002 ']' 00:06:50.189 05:59:15 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 70002 00:06:50.189 05:59:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70002 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:50.449 killing process with pid 70002 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70002' 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 70002 00:06:50.449 05:59:15 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 70002 00:06:50.709 00:06:50.709 real 0m1.816s 00:06:50.709 user 0m3.496s 00:06:50.709 sys 0m0.390s 00:06:50.709 05:59:16 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:50.709 05:59:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:50.709 ************************************ 00:06:50.709 END TEST spdkcli_tcp 00:06:50.709 ************************************ 00:06:50.709 05:59:16 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.709 05:59:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:50.709 05:59:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:50.709 05:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:50.709 ************************************ 00:06:50.709 START TEST dpdk_mem_utility 00:06:50.709 ************************************ 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:50.709 * Looking for test storage... 
00:06:50.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:50.709 05:59:16 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:50.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.709 --rc genhtml_branch_coverage=1 00:06:50.709 --rc genhtml_function_coverage=1 00:06:50.709 --rc genhtml_legend=1 00:06:50.709 --rc geninfo_all_blocks=1 00:06:50.709 --rc geninfo_unexecuted_blocks=1 00:06:50.709 00:06:50.709 ' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:50.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.709 --rc 
genhtml_branch_coverage=1 00:06:50.709 --rc genhtml_function_coverage=1 00:06:50.709 --rc genhtml_legend=1 00:06:50.709 --rc geninfo_all_blocks=1 00:06:50.709 --rc geninfo_unexecuted_blocks=1 00:06:50.709 00:06:50.709 ' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:50.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.709 --rc genhtml_branch_coverage=1 00:06:50.709 --rc genhtml_function_coverage=1 00:06:50.709 --rc genhtml_legend=1 00:06:50.709 --rc geninfo_all_blocks=1 00:06:50.709 --rc geninfo_unexecuted_blocks=1 00:06:50.709 00:06:50.709 ' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:50.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:50.709 --rc genhtml_branch_coverage=1 00:06:50.709 --rc genhtml_function_coverage=1 00:06:50.709 --rc genhtml_legend=1 00:06:50.709 --rc geninfo_all_blocks=1 00:06:50.709 --rc geninfo_unexecuted_blocks=1 00:06:50.709 00:06:50.709 ' 00:06:50.709 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:50.709 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=70095 00:06:50.709 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 70095 00:06:50.709 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 70095 ']' 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.709 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:50.969 [2024-10-01 05:59:16.368678] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:06:50.969 [2024-10-01 05:59:16.368753] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70095 ] 00:06:50.969 [2024-10-01 05:59:16.499856] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.969 [2024-10-01 05:59:16.535473] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.969 [2024-10-01 05:59:16.570957] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:51.230 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.230 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:51.230 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:51.230 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:51.230 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:51.230 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.230 { 00:06:51.230 "filename": "/tmp/spdk_mem_dump.txt" 00:06:51.230 } 00:06:51.230 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.230 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:51.230 DPDK memory size 860.000000 MiB in 1 heap(s) 00:06:51.230 1 heaps totaling size 860.000000 MiB 00:06:51.230 size: 860.000000 MiB heap id: 0 00:06:51.230 end heaps---------- 00:06:51.230 9 mempools totaling size 642.649841 MiB 00:06:51.230 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:51.230 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:51.230 size: 92.545471 MiB name: bdev_io_70095 00:06:51.230 size: 51.011292 MiB name: evtpool_70095 00:06:51.230 size: 50.003479 MiB name: msgpool_70095 00:06:51.230 size: 36.509338 MiB name: fsdev_io_70095 00:06:51.230 size: 21.763794 MiB name: PDU_Pool 00:06:51.230 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:51.230 size: 0.026123 MiB name: Session_Pool 00:06:51.230 end mempools------- 00:06:51.230 6 memzones totaling size 4.142822 MiB 00:06:51.230 size: 1.000366 MiB name: RG_ring_0_70095 00:06:51.230 size: 1.000366 MiB name: RG_ring_1_70095 00:06:51.230 size: 1.000366 MiB name: RG_ring_4_70095 00:06:51.230 size: 1.000366 MiB name: RG_ring_5_70095 00:06:51.230 size: 0.125366 MiB name: RG_ring_2_70095 00:06:51.230 size: 0.015991 MiB name: RG_ring_3_70095 00:06:51.230 end memzones------- 00:06:51.230 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:51.230 heap id: 0 total size: 860.000000 MiB number of busy elements: 318 number of free elements: 16 00:06:51.230 list of free elements. 
size: 13.934509 MiB 00:06:51.230 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:51.230 element at address: 0x200000800000 with size: 1.996948 MiB 00:06:51.230 element at address: 0x20001bc00000 with size: 0.999878 MiB 00:06:51.230 element at address: 0x20001be00000 with size: 0.999878 MiB 00:06:51.230 element at address: 0x200034a00000 with size: 0.994446 MiB 00:06:51.230 element at address: 0x200009600000 with size: 0.959839 MiB 00:06:51.230 element at address: 0x200015e00000 with size: 0.954285 MiB 00:06:51.230 element at address: 0x20001c000000 with size: 0.936584 MiB 00:06:51.230 element at address: 0x200000200000 with size: 0.835022 MiB 00:06:51.230 element at address: 0x20001d800000 with size: 0.565125 MiB 00:06:51.230 element at address: 0x200003e00000 with size: 0.489563 MiB 00:06:51.230 element at address: 0x20000d800000 with size: 0.489441 MiB 00:06:51.230 element at address: 0x20001c200000 with size: 0.485657 MiB 00:06:51.230 element at address: 0x200007000000 with size: 0.480469 MiB 00:06:51.230 element at address: 0x20002ac00000 with size: 0.395752 MiB 00:06:51.230 element at address: 0x200003a00000 with size: 0.352112 MiB 00:06:51.230 list of standard malloc elements. size: 199.268799 MiB 00:06:51.230 element at address: 0x20000d9fff80 with size: 132.000122 MiB 00:06:51.230 element at address: 0x2000097fff80 with size: 64.000122 MiB 00:06:51.230 element at address: 0x20001bcfff80 with size: 1.000122 MiB 00:06:51.230 element at address: 0x20001befff80 with size: 1.000122 MiB 00:06:51.230 element at address: 0x20001c0fff80 with size: 1.000122 MiB 00:06:51.230 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:51.230 element at address: 0x20001c0eff00 with size: 0.062622 MiB 00:06:51.230 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:51.230 element at address: 0x20001c0efdc0 with size: 0.000305 MiB 00:06:51.230 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6e00 with size: 0.000183 MiB 
00:06:51.230 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a5a240 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a5e700 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7e9c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ea80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7eb40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ec00 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ecc0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ed80 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ee40 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7ef00 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7efc0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f080 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f140 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f200 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f2c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f380 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f440 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f500 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003a7f5c0 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003aff880 with size: 0.000183 MiB 00:06:51.230 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d540 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d600 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d6c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d780 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d840 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7d900 with size: 0.000183 MiB 00:06:51.231 element at 
address: 0x200003e7d9c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7da80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7db40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7dc00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7dcc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7dd80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7de40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7df00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7dfc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e080 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e140 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e200 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e2c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e380 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e440 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e500 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e5c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e680 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e740 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e800 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e8c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7e980 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7ea40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7eb00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7ebc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7ec80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7ed40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003e7ee00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b000 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b0c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b180 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b240 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b300 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b3c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b480 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b540 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b600 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000707b6c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x2000070fb980 with size: 0.000183 MiB 00:06:51.231 element at address: 0x2000096fdd80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d4c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d580 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d640 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d700 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d7c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d880 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87d940 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87da00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d87dac0 
with size: 0.000183 MiB 00:06:51.231 element at address: 0x20000d8fdd80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x200015ef44c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001c0efc40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001c0efd00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001c2bc740 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890ac0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890b80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890c40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890d00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890dc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890e80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d890f40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891000 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8910c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891180 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891240 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891300 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8913c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891480 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891540 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891600 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8916c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891780 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891840 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891900 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8919c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891a80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891b40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891c00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891cc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891d80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891e40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891f00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d891fc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892080 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892140 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892200 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8922c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892380 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892440 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892500 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8925c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892680 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892740 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892800 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8928c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892980 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892a40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892b00 with size: 0.000183 MiB 
00:06:51.231 element at address: 0x20001d892bc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892c80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892d40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892e00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892ec0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d892f80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893040 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893100 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8931c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893280 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893340 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893400 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8934c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893580 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893640 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893700 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8937c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893880 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893940 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893a00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893ac0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893b80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893c40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893d00 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893dc0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893e80 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d893f40 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894000 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8940c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894180 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894240 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894300 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8943c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894480 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894540 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894600 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d8946c0 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894780 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894840 with size: 0.000183 MiB 00:06:51.231 element at address: 0x20001d894900 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d8949c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894a80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894b40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894c00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894cc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894d80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894e40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894f00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d894fc0 with size: 0.000183 MiB 00:06:51.232 element at 
address: 0x20001d895080 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d895140 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d895200 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d8952c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d895380 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20001d895440 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac65500 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac655c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c1c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c3c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c480 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c540 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c600 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c6c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c780 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c840 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c900 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6c9c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ca80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6cb40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6cc00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ccc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6cd80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ce40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6cf00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6cfc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d080 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d140 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d200 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d2c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d380 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d440 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d500 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d5c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d680 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d740 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d800 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d8c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6d980 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6da40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6db00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6dbc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6dc80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6dd40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6de00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6dec0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6df80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e040 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e100 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e1c0 
with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e280 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e340 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e400 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e4c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e580 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e640 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e700 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e7c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e880 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6e940 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ea00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6eac0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6eb80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ec40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ed00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6edc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ee80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ef40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f000 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f0c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f180 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f240 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f300 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f3c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f480 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f540 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f600 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f6c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f780 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f840 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f900 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6f9c0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fa80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fb40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fc00 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fcc0 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fd80 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6fe40 with size: 0.000183 MiB 00:06:51.232 element at address: 0x20002ac6ff00 with size: 0.000183 MiB 00:06:51.232 list of memzone associated elements. 
size: 646.796692 MiB 00:06:51.232 element at address: 0x20001d895500 with size: 211.416748 MiB 00:06:51.232 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:51.232 element at address: 0x20002ac6ffc0 with size: 157.562561 MiB 00:06:51.232 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:51.232 element at address: 0x200015ff4780 with size: 92.045044 MiB 00:06:51.232 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_70095_0 00:06:51.232 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:51.232 associated memzone info: size: 48.002930 MiB name: MP_evtpool_70095_0 00:06:51.232 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:51.232 associated memzone info: size: 48.002930 MiB name: MP_msgpool_70095_0 00:06:51.232 element at address: 0x2000071fdb80 with size: 36.008911 MiB 00:06:51.232 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_70095_0 00:06:51.232 element at address: 0x20001c3be940 with size: 20.255554 MiB 00:06:51.232 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:51.232 element at address: 0x200034bfeb40 with size: 18.005066 MiB 00:06:51.232 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:51.232 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:51.232 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_70095 00:06:51.232 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:51.232 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_70095 00:06:51.232 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:51.232 associated memzone info: size: 1.007996 MiB name: MP_evtpool_70095 00:06:51.232 element at address: 0x20000d8fde40 with size: 1.008118 MiB 00:06:51.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:51.232 element at address: 0x20001c2bc800 with size: 1.008118 MiB 00:06:51.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:51.232 element at address: 0x2000096fde40 with size: 1.008118 MiB 00:06:51.232 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:51.232 element at address: 0x2000070fba40 with size: 1.008118 MiB 00:06:51.232 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:51.232 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:51.232 associated memzone info: size: 1.000366 MiB name: RG_ring_0_70095 00:06:51.232 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:51.232 associated memzone info: size: 1.000366 MiB name: RG_ring_1_70095 00:06:51.232 element at address: 0x200015ef4580 with size: 1.000488 MiB 00:06:51.232 associated memzone info: size: 1.000366 MiB name: RG_ring_4_70095 00:06:51.232 element at address: 0x200034afe940 with size: 1.000488 MiB 00:06:51.232 associated memzone info: size: 1.000366 MiB name: RG_ring_5_70095 00:06:51.232 element at address: 0x200003a7f680 with size: 0.500488 MiB 00:06:51.232 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_70095 00:06:51.232 element at address: 0x200003e7eec0 with size: 0.500488 MiB 00:06:51.232 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_70095 00:06:51.233 element at address: 0x20000d87db80 with size: 0.500488 MiB 00:06:51.233 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:51.233 element at address: 0x20000707b780 with size: 0.500488 MiB 00:06:51.233 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:06:51.233 element at address: 0x20001c27c540 with size: 0.250488 MiB 00:06:51.233 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:51.233 element at address: 0x200003a5e7c0 with size: 0.125488 MiB 00:06:51.233 associated memzone info: size: 0.125366 MiB name: RG_ring_2_70095 00:06:51.233 element at address: 0x2000096f5b80 with size: 0.031738 MiB 00:06:51.233 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:51.233 element at address: 0x20002ac65680 with size: 0.023743 MiB 00:06:51.233 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:51.233 element at address: 0x200003a5a500 with size: 0.016113 MiB 00:06:51.233 associated memzone info: size: 0.015991 MiB name: RG_ring_3_70095 00:06:51.233 element at address: 0x20002ac6b7c0 with size: 0.002441 MiB 00:06:51.233 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:51.233 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:06:51.233 associated memzone info: size: 0.000183 MiB name: MP_msgpool_70095 00:06:51.233 element at address: 0x200003aff940 with size: 0.000305 MiB 00:06:51.233 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_70095 00:06:51.233 element at address: 0x200003a5a300 with size: 0.000305 MiB 00:06:51.233 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_70095 00:06:51.233 element at address: 0x20002ac6c280 with size: 0.000305 MiB 00:06:51.233 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:51.233 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:51.233 05:59:16 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 70095 00:06:51.233 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 70095 ']' 00:06:51.233 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 70095 00:06:51.233 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70095 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:51.492 killing process with pid 70095 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70095' 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 70095 00:06:51.492 05:59:16 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 70095 00:06:51.751 ************************************ 00:06:51.751 END TEST dpdk_mem_utility 00:06:51.751 ************************************ 00:06:51.751 00:06:51.751 real 0m1.006s 00:06:51.751 user 0m1.058s 00:06:51.751 sys 0m0.325s 00:06:51.751 05:59:17 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:51.751 05:59:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:51.751 05:59:17 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.751 05:59:17 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:51.751 05:59:17 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:51.751 05:59:17 -- common/autotest_common.sh@10 -- # set +x 
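Stripped of the xtrace noise, the dpdk_mem_utility test above boils down to: start spdk_tgt, ask it over RPC to dump its DPDK memory statistics (env_dpdk_get_mem_stats writes /tmp/spdk_mem_dump.txt), and post-process that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element view of heap 0. A hedged reconstruction of that sequence, with waitforlisten replaced by a plain sleep and error handling omitted:

  #!/usr/bin/env bash
  # Reconstruction of the traced dpdk_mem_utility flow; paths are taken from the log.
  SPDK=/home/vagrant/spdk_repo/spdk

  "$SPDK/build/bin/spdk_tgt" &              # start the target application
  spdkpid=$!
  trap 'kill $spdkpid' EXIT
  sleep 2                                   # stand-in for waitforlisten on /var/tmp/spdk.sock

  "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats   # dump stats to /tmp/spdk_mem_dump.txt

  "$SPDK/scripts/dpdk_mem_info.py"          # summary: heaps, mempools, memzones
  "$SPDK/scripts/dpdk_mem_info.py" -m 0     # per-element listing for heap id 0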
00:06:51.751 ************************************ 00:06:51.751 START TEST event 00:06:51.751 ************************************ 00:06:51.751 05:59:17 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:51.751 * Looking for test storage... 00:06:51.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:51.751 05:59:17 event -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:51.751 05:59:17 event -- common/autotest_common.sh@1681 -- # lcov --version 00:06:51.752 05:59:17 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:51.752 05:59:17 event -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:51.752 05:59:17 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:51.752 05:59:17 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:51.752 05:59:17 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:51.752 05:59:17 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:51.752 05:59:17 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:51.752 05:59:17 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:51.752 05:59:17 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:51.752 05:59:17 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:51.752 05:59:17 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:51.752 05:59:17 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:51.752 05:59:17 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:51.752 05:59:17 event -- scripts/common.sh@344 -- # case "$op" in 00:06:51.752 05:59:17 event -- scripts/common.sh@345 -- # : 1 00:06:51.752 05:59:17 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:51.752 05:59:17 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:51.752 05:59:17 event -- scripts/common.sh@365 -- # decimal 1 00:06:51.752 05:59:17 event -- scripts/common.sh@353 -- # local d=1 00:06:51.752 05:59:17 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:51.752 05:59:17 event -- scripts/common.sh@355 -- # echo 1 00:06:52.011 05:59:17 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:52.011 05:59:17 event -- scripts/common.sh@366 -- # decimal 2 00:06:52.011 05:59:17 event -- scripts/common.sh@353 -- # local d=2 00:06:52.011 05:59:17 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:52.011 05:59:17 event -- scripts/common.sh@355 -- # echo 2 00:06:52.011 05:59:17 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:52.011 05:59:17 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:52.011 05:59:17 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:52.011 05:59:17 event -- scripts/common.sh@368 -- # return 0 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.011 --rc genhtml_branch_coverage=1 00:06:52.011 --rc genhtml_function_coverage=1 00:06:52.011 --rc genhtml_legend=1 00:06:52.011 --rc geninfo_all_blocks=1 00:06:52.011 --rc geninfo_unexecuted_blocks=1 00:06:52.011 00:06:52.011 ' 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.011 --rc genhtml_branch_coverage=1 00:06:52.011 --rc genhtml_function_coverage=1 00:06:52.011 --rc genhtml_legend=1 00:06:52.011 --rc 
geninfo_all_blocks=1 00:06:52.011 --rc geninfo_unexecuted_blocks=1 00:06:52.011 00:06:52.011 ' 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.011 --rc genhtml_branch_coverage=1 00:06:52.011 --rc genhtml_function_coverage=1 00:06:52.011 --rc genhtml_legend=1 00:06:52.011 --rc geninfo_all_blocks=1 00:06:52.011 --rc geninfo_unexecuted_blocks=1 00:06:52.011 00:06:52.011 ' 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:52.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:52.011 --rc genhtml_branch_coverage=1 00:06:52.011 --rc genhtml_function_coverage=1 00:06:52.011 --rc genhtml_legend=1 00:06:52.011 --rc geninfo_all_blocks=1 00:06:52.011 --rc geninfo_unexecuted_blocks=1 00:06:52.011 00:06:52.011 ' 00:06:52.011 05:59:17 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:52.011 05:59:17 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:52.011 05:59:17 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:52.011 05:59:17 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:52.011 05:59:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:52.011 ************************************ 00:06:52.011 START TEST event_perf 00:06:52.011 ************************************ 00:06:52.011 05:59:17 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:52.011 Running I/O for 1 seconds...[2024-10-01 05:59:17.404173] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:52.011 [2024-10-01 05:59:17.404306] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70172 ] 00:06:52.011 [2024-10-01 05:59:17.540608] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:52.011 [2024-10-01 05:59:17.575102] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.011 [2024-10-01 05:59:17.575242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.011 [2024-10-01 05:59:17.575308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:52.011 [2024-10-01 05:59:17.575307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.390 Running I/O for 1 seconds... 00:06:53.390 lcore 0: 194502 00:06:53.390 lcore 1: 194503 00:06:53.390 lcore 2: 194502 00:06:53.390 lcore 3: 194502 00:06:53.390 done. 
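The per-lcore counters just above are what event_perf reports after its 1-second run (-t 1) on the 0xF core mask: each of the four reactors executed roughly 194,500 events, so the aggregate is 194502 + 194503 + 194502 + 194502 = 778,009 events over the one-second window, i.e. about 0.78 million events/sec across four cores. To total them from a saved console log (the file name here is illustrative):

  grep -oE 'lcore [0-9]+: [0-9]+' event_perf.log | awk '{sum += $3} END {print sum, "events in the run"}'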
00:06:53.390 00:06:53.390 real 0m1.241s 00:06:53.390 user 0m4.074s 00:06:53.390 sys 0m0.048s 00:06:53.390 05:59:18 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:53.390 05:59:18 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.390 ************************************ 00:06:53.390 END TEST event_perf 00:06:53.390 ************************************ 00:06:53.390 05:59:18 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:53.390 05:59:18 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:53.390 05:59:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:53.390 05:59:18 event -- common/autotest_common.sh@10 -- # set +x 00:06:53.390 ************************************ 00:06:53.390 START TEST event_reactor 00:06:53.390 ************************************ 00:06:53.390 05:59:18 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:53.390 [2024-10-01 05:59:18.693470] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:53.390 [2024-10-01 05:59:18.693546] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70206 ] 00:06:53.390 [2024-10-01 05:59:18.825477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.390 [2024-10-01 05:59:18.857575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.326 test_start 00:06:54.327 oneshot 00:06:54.327 tick 100 00:06:54.327 tick 100 00:06:54.327 tick 250 00:06:54.327 tick 100 00:06:54.327 tick 100 00:06:54.327 tick 100 00:06:54.327 tick 250 00:06:54.327 tick 500 00:06:54.327 tick 100 00:06:54.327 tick 100 00:06:54.327 tick 250 00:06:54.327 tick 100 00:06:54.327 tick 100 00:06:54.327 test_end 00:06:54.327 00:06:54.327 real 0m1.228s 00:06:54.327 user 0m1.086s 00:06:54.327 sys 0m0.037s 00:06:54.327 05:59:19 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.327 05:59:19 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:54.327 ************************************ 00:06:54.327 END TEST event_reactor 00:06:54.327 ************************************ 00:06:54.586 05:59:19 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.586 05:59:19 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:54.586 05:59:19 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.586 05:59:19 event -- common/autotest_common.sh@10 -- # set +x 00:06:54.586 ************************************ 00:06:54.586 START TEST event_reactor_perf 00:06:54.586 ************************************ 00:06:54.586 05:59:19 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:54.586 [2024-10-01 05:59:19.976608] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
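Each of these sub-tests is driven through the same run_test wrapper, which is what prints the START TEST / END TEST banners and the real/user/sys timings seen between them. A minimal illustration of that pattern (a sketch, not the actual autotest_common.sh implementation):

  # Banner, timed execution, closing banner - roughly what run_test does around each test.
  run_test_sketch() {
      local name=$1; shift
      echo "************ START TEST $name ************"
      time "$@"
      local rc=$?
      echo "************ END TEST $name ************"
      return $rc
  }

  run_test_sketch event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1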
00:06:54.586 [2024-10-01 05:59:19.976718] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70236 ] 00:06:54.586 [2024-10-01 05:59:20.109323] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.586 [2024-10-01 05:59:20.148117] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.966 test_start 00:06:55.966 test_end 00:06:55.966 Performance: 452966 events per second 00:06:55.966 00:06:55.966 real 0m1.264s 00:06:55.966 user 0m1.114s 00:06:55.966 sys 0m0.046s 00:06:55.966 05:59:21 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:55.966 ************************************ 00:06:55.966 END TEST event_reactor_perf 00:06:55.966 05:59:21 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:55.966 ************************************ 00:06:55.966 05:59:21 event -- event/event.sh@49 -- # uname -s 00:06:55.966 05:59:21 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:55.966 05:59:21 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.966 05:59:21 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:55.966 05:59:21 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:55.966 05:59:21 event -- common/autotest_common.sh@10 -- # set +x 00:06:55.966 ************************************ 00:06:55.966 START TEST event_scheduler 00:06:55.966 ************************************ 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:55.966 * Looking for test storage... 
00:06:55.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.966 05:59:21 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.966 --rc genhtml_branch_coverage=1 00:06:55.966 --rc genhtml_function_coverage=1 00:06:55.966 --rc genhtml_legend=1 00:06:55.966 --rc geninfo_all_blocks=1 00:06:55.966 --rc geninfo_unexecuted_blocks=1 00:06:55.966 00:06:55.966 ' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.966 --rc genhtml_branch_coverage=1 00:06:55.966 --rc genhtml_function_coverage=1 00:06:55.966 --rc genhtml_legend=1 00:06:55.966 --rc geninfo_all_blocks=1 00:06:55.966 --rc geninfo_unexecuted_blocks=1 00:06:55.966 00:06:55.966 ' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.966 --rc genhtml_branch_coverage=1 00:06:55.966 --rc genhtml_function_coverage=1 00:06:55.966 --rc genhtml_legend=1 00:06:55.966 --rc geninfo_all_blocks=1 00:06:55.966 --rc geninfo_unexecuted_blocks=1 00:06:55.966 00:06:55.966 ' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.966 --rc genhtml_branch_coverage=1 00:06:55.966 --rc genhtml_function_coverage=1 00:06:55.966 --rc genhtml_legend=1 00:06:55.966 --rc geninfo_all_blocks=1 00:06:55.966 --rc geninfo_unexecuted_blocks=1 00:06:55.966 00:06:55.966 ' 00:06:55.966 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:55.966 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=70306 00:06:55.966 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:55.966 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 70306 00:06:55.966 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:55.966 05:59:21 
event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 70306 ']' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:55.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:55.966 05:59:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:55.966 [2024-10-01 05:59:21.513378] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:06:55.966 [2024-10-01 05:59:21.513485] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70306 ] 00:06:56.226 [2024-10-01 05:59:21.649447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.226 [2024-10-01 05:59:21.693607] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.226 [2024-10-01 05:59:21.693771] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.226 [2024-10-01 05:59:21.693809] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.226 [2024-10-01 05:59:21.693813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:56.226 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.226 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.226 POWER: Cannot set governor of lcore 0 to performance 00:06:56.226 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:56.226 POWER: Cannot set governor of lcore 0 to userspace 00:06:56.226 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:56.226 POWER: Unable to set Power Management Environment for lcore 0 00:06:56.226 [2024-10-01 05:59:21.803540] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:56.226 [2024-10-01 05:59:21.803555] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:56.226 [2024-10-01 05:59:21.803583] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:56.226 [2024-10-01 05:59:21.803601] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:56.226 [2024-10-01 05:59:21.803610] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:56.226 [2024-10-01 05:59:21.803619] scheduler_dynamic.c: 
431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.226 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.226 05:59:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 [2024-10-01 05:59:21.842989] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:56.486 [2024-10-01 05:59:21.860096] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:56.486 05:59:21 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:56.486 05:59:21 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:56.486 05:59:21 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 ************************************ 00:06:56.486 START TEST scheduler_create_thread 00:06:56.486 ************************************ 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 2 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 3 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 4 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.486 5 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 6 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 7 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 8 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 9 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 10 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@22 -- # thread_id=11 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:56.486 05:59:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:57.423 05:59:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:57.423 05:59:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:57.423 05:59:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:57.423 05:59:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:58.801 05:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:58.802 05:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:58.802 05:59:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:58.802 05:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:58.802 05:59:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 05:59:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:59.739 00:06:59.739 real 0m3.375s 00:06:59.739 user 0m0.022s 00:06:59.739 sys 0m0.004s 00:06:59.739 05:59:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.739 05:59:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:59.739 ************************************ 00:06:59.739 END TEST scheduler_create_thread 00:06:59.739 ************************************ 00:06:59.739 05:59:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:59.739 05:59:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 70306 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 70306 ']' 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 70306 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70306 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:59.739 killing process with pid 70306 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70306' 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 70306 00:06:59.739 05:59:25 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 70306 00:07:00.308 [2024-10-01 05:59:25.627868] scheduler.c: 360:test_shutdown: *NOTICE*: 
Scheduler test application stopped. 00:07:00.308 00:07:00.308 real 0m4.529s 00:07:00.308 user 0m7.924s 00:07:00.308 sys 0m0.297s 00:07:00.308 05:59:25 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.308 05:59:25 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 ************************************ 00:07:00.308 END TEST event_scheduler 00:07:00.308 ************************************ 00:07:00.308 05:59:25 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:00.308 05:59:25 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:00.308 05:59:25 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.308 05:59:25 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.308 05:59:25 event -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 ************************************ 00:07:00.308 START TEST app_repeat 00:07:00.308 ************************************ 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@19 -- # repeat_pid=70403 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:00.308 Process app_repeat pid: 70403 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 70403' 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:00.308 spdk_app_start Round 0 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:00.308 05:59:25 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70403 /var/tmp/spdk-nbd.sock 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70403 ']' 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:00.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:00.308 05:59:25 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:00.308 [2024-10-01 05:59:25.888458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
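For reference, the waitforlisten step traced here boils down to polling the app's RPC UNIX-domain socket until it answers; a minimal sketch of that pattern, simplified from (not identical to) the autotest_common.sh helper, with rpc_get_methods standing in for the probe call:

    # poll the RPC socket of the freshly started app_repeat process
    sock=/var/tmp/spdk-nbd.sock
    for i in $(seq 1 100); do
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            break                      # app is up and listening
        fi
        sleep 0.1
    done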
00:07:00.309 [2024-10-01 05:59:25.888562] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70403 ] 00:07:00.568 [2024-10-01 05:59:26.018539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.568 [2024-10-01 05:59:26.052879] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.568 [2024-10-01 05:59:26.052887] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.568 [2024-10-01 05:59:26.081664] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:00.568 05:59:26 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:00.568 05:59:26 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:00.568 05:59:26 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.135 Malloc0 00:07:01.135 05:59:26 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:01.135 Malloc1 00:07:01.395 05:59:26 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.396 05:59:26 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:01.655 /dev/nbd0 00:07:01.655 05:59:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:01.655 05:59:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.655 05:59:27 
event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.655 1+0 records in 00:07:01.655 1+0 records out 00:07:01.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318962 s, 12.8 MB/s 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.655 05:59:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:01.655 05:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.655 05:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.656 05:59:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.915 /dev/nbd1 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.915 1+0 records in 00:07:01.915 1+0 records out 00:07:01.915 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000310612 s, 13.2 MB/s 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:01.915 05:59:27 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count 
/var/tmp/spdk-nbd.sock 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.915 05:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:02.174 { 00:07:02.174 "nbd_device": "/dev/nbd0", 00:07:02.174 "bdev_name": "Malloc0" 00:07:02.174 }, 00:07:02.174 { 00:07:02.174 "nbd_device": "/dev/nbd1", 00:07:02.174 "bdev_name": "Malloc1" 00:07:02.174 } 00:07:02.174 ]' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:02.174 { 00:07:02.174 "nbd_device": "/dev/nbd0", 00:07:02.174 "bdev_name": "Malloc0" 00:07:02.174 }, 00:07:02.174 { 00:07:02.174 "nbd_device": "/dev/nbd1", 00:07:02.174 "bdev_name": "Malloc1" 00:07:02.174 } 00:07:02.174 ]' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:02.174 /dev/nbd1' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:02.174 /dev/nbd1' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.174 05:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:02.175 256+0 records in 00:07:02.175 256+0 records out 00:07:02.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00723406 s, 145 MB/s 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:02.175 256+0 records in 00:07:02.175 256+0 records out 00:07:02.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223478 s, 46.9 MB/s 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:02.175 256+0 records in 00:07:02.175 256+0 records out 00:07:02.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239615 s, 43.8 MB/s 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.175 05:59:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:02.435 05:59:27 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.695 05:59:28 
event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.695 05:59:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:03.263 05:59:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:03.263 05:59:28 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:03.523 05:59:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:03.523 [2024-10-01 05:59:29.068867] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.523 [2024-10-01 05:59:29.101231] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.523 [2024-10-01 05:59:29.101242] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.523 [2024-10-01 05:59:29.129310] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.523 [2024-10-01 05:59:29.129417] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:03.523 [2024-10-01 05:59:29.129431] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:06.844 05:59:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:06.844 spdk_app_start Round 1 00:07:06.844 05:59:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:06.844 05:59:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70403 /var/tmp/spdk-nbd.sock 00:07:06.844 05:59:31 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70403 ']' 00:07:06.844 05:59:31 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:06.844 05:59:31 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:06.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:06.845 05:59:31 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
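For reference, each app_repeat round repeats the same bdev/NBD setup visible in the trace; a condensed sketch using only the RPCs that appear above (bdev names are whatever bdev_malloc_create returns, Malloc0/Malloc1 in this run):

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    # two 64 MiB malloc bdevs with 4096-byte blocks
    $RPC bdev_malloc_create 64 4096      # -> Malloc0
    $RPC bdev_malloc_create 64 4096      # -> Malloc1
    # export them as NBD block devices
    $RPC nbd_start_disk Malloc0 /dev/nbd0
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    $RPC nbd_get_disks                   # JSON listing consumed by nbd_get_count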
00:07:06.845 05:59:31 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:06.845 05:59:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.845 05:59:32 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:06.845 05:59:32 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:06.845 05:59:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.104 Malloc0 00:07:07.104 05:59:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:07.363 Malloc1 00:07:07.363 05:59:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.363 05:59:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:07.363 /dev/nbd0 00:07:07.622 05:59:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:07.622 05:59:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.622 05:59:32 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.622 1+0 records in 00:07:07.622 1+0 records out 
00:07:07.622 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270731 s, 15.1 MB/s 00:07:07.622 05:59:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.622 05:59:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.622 05:59:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.622 05:59:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.622 05:59:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:07.622 05:59:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.622 05:59:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.622 05:59:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:07.881 /dev/nbd1 00:07:07.881 05:59:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:07.881 05:59:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:07.881 1+0 records in 00:07:07.881 1+0 records out 00:07:07.881 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000331273 s, 12.4 MB/s 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:07.881 05:59:33 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:07.881 05:59:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:07.882 05:59:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:07.882 05:59:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:07.882 05:59:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:07.882 05:59:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:08.141 { 00:07:08.141 "nbd_device": "/dev/nbd0", 00:07:08.141 "bdev_name": "Malloc0" 00:07:08.141 }, 00:07:08.141 { 00:07:08.141 "nbd_device": "/dev/nbd1", 00:07:08.141 "bdev_name": "Malloc1" 00:07:08.141 } 
00:07:08.141 ]' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:08.141 { 00:07:08.141 "nbd_device": "/dev/nbd0", 00:07:08.141 "bdev_name": "Malloc0" 00:07:08.141 }, 00:07:08.141 { 00:07:08.141 "nbd_device": "/dev/nbd1", 00:07:08.141 "bdev_name": "Malloc1" 00:07:08.141 } 00:07:08.141 ]' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:08.141 /dev/nbd1' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:08.141 /dev/nbd1' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:08.141 256+0 records in 00:07:08.141 256+0 records out 00:07:08.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010633 s, 98.6 MB/s 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:08.141 256+0 records in 00:07:08.141 256+0 records out 00:07:08.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172711 s, 60.7 MB/s 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:08.141 05:59:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:08.141 256+0 records in 00:07:08.141 256+0 records out 00:07:08.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0329409 s, 31.8 MB/s 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.400 05:59:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:08.659 05:59:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:08.918 05:59:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:09.176 05:59:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:09.177 05:59:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:09.177 05:59:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:09.435 05:59:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:09.435 [2024-10-01 05:59:35.043567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:09.694 [2024-10-01 05:59:35.076745] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.694 [2024-10-01 05:59:35.076758] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.694 [2024-10-01 05:59:35.105758] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:09.694 [2024-10-01 05:59:35.105867] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:09.694 [2024-10-01 05:59:35.105879] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:12.980 05:59:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:12.980 spdk_app_start Round 2 00:07:12.980 05:59:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:12.980 05:59:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 70403 /var/tmp/spdk-nbd.sock 00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70403 ']' 00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
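For reference, the write/verify pass that nbd_dd_data_verify performs each round reduces to the dd and cmp invocations traced above; a condensed sketch (temp file path as used in this workspace):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write it to each NBD device
        cmp -b -n 1M "$tmp" "$nbd"                              # read back and compare
    done
    rm "$tmp"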
00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.980 05:59:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:12.980 05:59:38 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:12.980 05:59:38 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:12.980 05:59:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.980 Malloc0 00:07:12.980 05:59:38 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:13.240 Malloc1 00:07:13.240 05:59:38 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.240 05:59:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:13.500 /dev/nbd0 00:07:13.500 05:59:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:13.500 05:59:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.500 1+0 records in 00:07:13.500 1+0 records out 
00:07:13.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000139671 s, 29.3 MB/s 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.500 05:59:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:13.500 05:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.500 05:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.500 05:59:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.760 /dev/nbd1 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.760 1+0 records in 00:07:13.760 1+0 records out 00:07:13.760 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311811 s, 13.1 MB/s 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:13.760 05:59:39 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.760 05:59:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:14.019 { 00:07:14.019 "nbd_device": "/dev/nbd0", 00:07:14.019 "bdev_name": "Malloc0" 00:07:14.019 }, 00:07:14.019 { 00:07:14.019 "nbd_device": "/dev/nbd1", 00:07:14.019 "bdev_name": "Malloc1" 00:07:14.019 } 
00:07:14.019 ]' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:14.019 { 00:07:14.019 "nbd_device": "/dev/nbd0", 00:07:14.019 "bdev_name": "Malloc0" 00:07:14.019 }, 00:07:14.019 { 00:07:14.019 "nbd_device": "/dev/nbd1", 00:07:14.019 "bdev_name": "Malloc1" 00:07:14.019 } 00:07:14.019 ]' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:14.019 /dev/nbd1' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:14.019 /dev/nbd1' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:14.019 05:59:39 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:14.279 256+0 records in 00:07:14.279 256+0 records out 00:07:14.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106789 s, 98.2 MB/s 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:14.279 256+0 records in 00:07:14.279 256+0 records out 00:07:14.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201644 s, 52.0 MB/s 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:14.279 256+0 records in 00:07:14.279 256+0 records out 00:07:14.279 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275134 s, 38.1 MB/s 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:14.279 05:59:39 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.279 05:59:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:14.537 05:59:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.795 05:59:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:15.052 05:59:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:15.052 05:59:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:15.053 05:59:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:15.053 05:59:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:15.620 05:59:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:15.620 [2024-10-01 05:59:41.054407] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:15.620 [2024-10-01 05:59:41.088951] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.620 [2024-10-01 05:59:41.088962] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.620 [2024-10-01 05:59:41.117210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:15.620 [2024-10-01 05:59:41.117340] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:15.620 [2024-10-01 05:59:41.117353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.907 05:59:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 70403 /var/tmp/spdk-nbd.sock 00:07:18.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 70403 ']' 00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
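For reference, the nbd write/verify cycle traced above reduces to the pattern below. This is a condensed sketch reconstructed from the traced dd and cmp commands (the pattern-file path, block size and 1M compare length are taken straight from the trace), not the verbatim helper in nbd_common.sh:

    # Fill a 1 MiB pattern file, push it to each NBD device with O_DIRECT,
    # then byte-compare every device against the same file.
    tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256             # write phase: generate the pattern
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct   # bypass the page cache on the device side
    done

    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                              # verify phase: any mismatch fails the test
    done
    rm "$tmp_file"

The nbd_stop_disk RPCs that follow then poll /proc/partitions (up to 20 iterations per device, as traced) until each nbdX entry disappears before the NBD socket is torn down.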
00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.907 05:59:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:07:18.907 05:59:44 event.app_repeat -- event/event.sh@39 -- # killprocess 70403 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 70403 ']' 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 70403 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70403 00:07:18.907 killing process with pid 70403 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70403' 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@969 -- # kill 70403 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@974 -- # wait 70403 00:07:18.907 spdk_app_start is called in Round 0. 00:07:18.907 Shutdown signal received, stop current app iteration 00:07:18.907 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:07:18.907 spdk_app_start is called in Round 1. 00:07:18.907 Shutdown signal received, stop current app iteration 00:07:18.907 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:07:18.907 spdk_app_start is called in Round 2. 00:07:18.907 Shutdown signal received, stop current app iteration 00:07:18.907 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 reinitialization... 00:07:18.907 spdk_app_start is called in Round 3. 00:07:18.907 Shutdown signal received, stop current app iteration 00:07:18.907 05:59:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:18.907 05:59:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:18.907 00:07:18.907 real 0m18.513s 00:07:18.907 user 0m42.467s 00:07:18.907 sys 0m2.506s 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.907 ************************************ 00:07:18.907 END TEST app_repeat 00:07:18.907 ************************************ 00:07:18.907 05:59:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.907 05:59:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:18.907 05:59:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:18.907 05:59:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:18.907 05:59:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.907 05:59:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.907 ************************************ 00:07:18.907 START TEST cpu_locks 00:07:18.907 ************************************ 00:07:18.907 05:59:44 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:18.907 * Looking for test storage... 
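The killprocess teardown traced above for pid 70403 recurs in every cpu_locks sub-test below. Pieced together from the traced commands it behaves roughly like this simplified sketch (the real helper in autotest_common.sh additionally special-cases processes running under sudo and non-Linux hosts):

    # Kill a test target by pid and reap it.
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                         # a pid must be supplied
        kill -0 "$pid"                                    # fail early if the process is already gone
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for an SPDK target, per the trace
        echo "killing process with pid $pid"
        kill "$pid"                                       # plain SIGTERM; the trace shows no explicit signal
        wait "$pid"                                       # reap it (the target is a child of the test shell)
    }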
00:07:18.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:18.907 05:59:44 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:18.907 05:59:44 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:07:18.907 05:59:44 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:19.167 05:59:44 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:19.167 05:59:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:19.167 05:59:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:19.168 05:59:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.168 --rc genhtml_branch_coverage=1 00:07:19.168 --rc genhtml_function_coverage=1 00:07:19.168 --rc genhtml_legend=1 00:07:19.168 --rc geninfo_all_blocks=1 00:07:19.168 --rc geninfo_unexecuted_blocks=1 00:07:19.168 00:07:19.168 ' 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.168 --rc genhtml_branch_coverage=1 00:07:19.168 --rc genhtml_function_coverage=1 
00:07:19.168 --rc genhtml_legend=1 00:07:19.168 --rc geninfo_all_blocks=1 00:07:19.168 --rc geninfo_unexecuted_blocks=1 00:07:19.168 00:07:19.168 ' 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.168 --rc genhtml_branch_coverage=1 00:07:19.168 --rc genhtml_function_coverage=1 00:07:19.168 --rc genhtml_legend=1 00:07:19.168 --rc geninfo_all_blocks=1 00:07:19.168 --rc geninfo_unexecuted_blocks=1 00:07:19.168 00:07:19.168 ' 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:19.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:19.168 --rc genhtml_branch_coverage=1 00:07:19.168 --rc genhtml_function_coverage=1 00:07:19.168 --rc genhtml_legend=1 00:07:19.168 --rc geninfo_all_blocks=1 00:07:19.168 --rc geninfo_unexecuted_blocks=1 00:07:19.168 00:07:19.168 ' 00:07:19.168 05:59:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:19.168 05:59:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:19.168 05:59:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:19.168 05:59:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:19.168 05:59:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.168 ************************************ 00:07:19.168 START TEST default_locks 00:07:19.168 ************************************ 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=70838 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 70838 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70838 ']' 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:19.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:19.168 05:59:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:19.168 [2024-10-01 05:59:44.678123] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:19.168 [2024-10-01 05:59:44.678234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70838 ] 00:07:19.428 [2024-10-01 05:59:44.813805] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.428 [2024-10-01 05:59:44.850097] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.428 [2024-10-01 05:59:44.888718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:19.428 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.428 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:07:19.428 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 70838 00:07:19.428 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 70838 00:07:19.428 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 70838 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 70838 ']' 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 70838 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70838 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:19.999 killing process with pid 70838 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70838' 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 70838 00:07:19.999 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 70838 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 70838 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 70838 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 70838 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 70838 ']' 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.259 
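The locks_exist check traced a few entries above is nothing more than lslocks filtered for the per-core lock files, which (per the rest of the suite) live under /var/tmp/spdk_cpu_lock_*; a minimal equivalent is:

    # Return 0 iff the given pid holds at least one spdk_cpu_lock_* file lock.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

default_locks therefore expects locks_exist to succeed while the -m 0x1 target (pid 70838) is alive, and expects both waitforlisten and the lock-file glob to come up empty once it has been killed, which is what the NOT/no_locks sequence around this point asserts.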
05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.259 ERROR: process (pid: 70838) is no longer running 00:07:20.259 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (70838) - No such process 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.259 00:07:20.259 real 0m1.029s 00:07:20.259 user 0m1.079s 00:07:20.259 sys 0m0.395s 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.259 ************************************ 00:07:20.259 END TEST default_locks 00:07:20.259 ************************************ 00:07:20.259 05:59:45 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.259 05:59:45 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:20.259 05:59:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.259 05:59:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.259 05:59:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:20.259 ************************************ 00:07:20.259 START TEST default_locks_via_rpc 00:07:20.259 ************************************ 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=70877 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 70877 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 70877 ']' 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:20.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.259 05:59:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.259 [2024-10-01 05:59:45.750125] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:20.259 [2024-10-01 05:59:45.750218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70877 ] 00:07:20.518 [2024-10-01 05:59:45.882046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.518 [2024-10-01 05:59:45.915898] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.518 [2024-10-01 05:59:45.950949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 70877 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 70877 00:07:20.518 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 70877 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 70877 ']' 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 70877 00:07:21.087 05:59:46 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70877 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:21.087 killing process with pid 70877 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70877' 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 70877 00:07:21.087 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 70877 00:07:21.347 00:07:21.347 real 0m1.125s 00:07:21.347 user 0m1.220s 00:07:21.347 sys 0m0.439s 00:07:21.347 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:21.347 05:59:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 ************************************ 00:07:21.347 END TEST default_locks_via_rpc 00:07:21.347 ************************************ 00:07:21.347 05:59:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:21.347 05:59:46 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:21.347 05:59:46 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:21.347 05:59:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 ************************************ 00:07:21.347 START TEST non_locking_app_on_locked_coremask 00:07:21.347 ************************************ 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=70915 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 70915 /var/tmp/spdk.sock 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70915 ']' 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
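The default_locks_via_rpc run that finished just above exercises the same lock, but toggles it at runtime over the default RPC socket instead of at process start. The traced sequence corresponds roughly to the following (pid 70877 and the RPC method names are from the trace; the ls line stands in for the traced no_locks glob):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc" framework_disable_cpumask_locks          # release the core lock at runtime
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null         # expected: no output (no_locks)
    "$rpc" framework_enable_cpumask_locks           # re-claim the core lock
    lslocks -p 70877 | grep -q spdk_cpu_lock        # expected: lock held again (locks_exist)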
00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.347 05:59:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.347 [2024-10-01 05:59:46.941551] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:21.347 [2024-10-01 05:59:46.941654] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70915 ] 00:07:21.607 [2024-10-01 05:59:47.075854] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.607 [2024-10-01 05:59:47.109519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.607 [2024-10-01 05:59:47.145350] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=70929 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 70929 /var/tmp/spdk2.sock 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70929 ']' 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:21.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:21.867 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:21.867 [2024-10-01 05:59:47.333175] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:21.867 [2024-10-01 05:59:47.333281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70929 ] 00:07:21.867 [2024-10-01 05:59:47.470916] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
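non_locking_app_on_locked_coremask, whose two targets have both come up at this point, shows the intended escape hatch: the first instance claims core 0, and a second instance can still share that core as long as it opts out of the lock. The two invocations are taken from the trace; the backgrounding and pid bookkeeping are added here for illustration (the real test also waits for each RPC socket before continuing):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                                                  # claims the core-0 lock file
    pid1=$!
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # same core, no lock taken
    pid2=$!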
00:07:21.867 [2024-10-01 05:59:47.470966] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.127 [2024-10-01 05:59:47.539704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.127 [2024-10-01 05:59:47.610466] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.386 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:22.386 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:22.386 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 70915 00:07:22.386 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70915 00:07:22.386 05:59:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 70915 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70915 ']' 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70915 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70915 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.325 killing process with pid 70915 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70915' 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70915 00:07:23.325 05:59:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70915 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 70929 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70929 ']' 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 70929 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70929 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:23.895 killing process with pid 70929 00:07:23.895 05:59:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70929' 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 70929 00:07:23.895 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 70929 00:07:24.155 00:07:24.155 real 0m2.650s 00:07:24.155 user 0m2.971s 00:07:24.155 sys 0m0.906s 00:07:24.155 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:24.155 05:59:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.155 ************************************ 00:07:24.155 END TEST non_locking_app_on_locked_coremask 00:07:24.155 ************************************ 00:07:24.155 05:59:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:24.155 05:59:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:24.155 05:59:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:24.155 05:59:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:24.155 ************************************ 00:07:24.155 START TEST locking_app_on_unlocked_coremask 00:07:24.155 ************************************ 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=70984 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 70984 /var/tmp/spdk.sock 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70984 ']' 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.155 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.155 [2024-10-01 05:59:49.642670] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:24.155 [2024-10-01 05:59:49.642770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70984 ] 00:07:24.415 [2024-10-01 05:59:49.781037] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:24.415 [2024-10-01 05:59:49.781091] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.415 [2024-10-01 05:59:49.816843] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.415 [2024-10-01 05:59:49.851714] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=70987 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 70987 /var/tmp/spdk2.sock 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 70987 ']' 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:24.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:24.415 05:59:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:24.674 [2024-10-01 05:59:50.031431] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:24.674 [2024-10-01 05:59:50.031537] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70987 ] 00:07:24.674 [2024-10-01 05:59:50.171429] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.674 [2024-10-01 05:59:50.241401] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.934 [2024-10-01 05:59:50.312940] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:25.503 05:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:25.503 05:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:25.503 05:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 70987 00:07:25.503 05:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:25.503 05:59:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 70987 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 70984 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70984 ']' 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70984 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70984 00:07:26.441 killing process with pid 70984 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70984' 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70984 00:07:26.441 05:59:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70984 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 70987 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 70987 ']' 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 70987 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70987 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:27.037 killing process with pid 70987 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70987' 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 70987 00:07:27.037 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 70987 00:07:27.309 ************************************ 00:07:27.309 END TEST locking_app_on_unlocked_coremask 00:07:27.309 ************************************ 00:07:27.309 00:07:27.309 real 0m3.084s 00:07:27.309 user 0m3.530s 00:07:27.309 sys 0m0.940s 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.309 05:59:52 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:27.309 05:59:52 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:27.309 05:59:52 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:27.309 05:59:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.309 ************************************ 00:07:27.309 START TEST locking_app_on_locked_coremask 00:07:27.309 ************************************ 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=71054 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 71054 /var/tmp/spdk.sock 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71054 ']' 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.309 05:59:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.309 [2024-10-01 05:59:52.777803] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:27.309 [2024-10-01 05:59:52.777929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71054 ] 00:07:27.309 [2024-10-01 05:59:52.915793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.568 [2024-10-01 05:59:52.949838] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.568 [2024-10-01 05:59:52.986570] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=71057 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 71057 /var/tmp/spdk2.sock 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71057 /var/tmp/spdk2.sock 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71057 /var/tmp/spdk2.sock 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 71057 ']' 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:27.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:27.568 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:27.568 [2024-10-01 05:59:53.175316] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
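The step that follows is an expected failure: the second target is launched with -m 0x1 on /var/tmp/spdk2.sock and without --disable-cpumask-locks, so it must abort because core 0 is already claimed by pid 71054. The NOT wrapper used for that assertion, reconstructed loosely from the traced steps (the real helper also distinguishes exit codes above 128 and an optional error-pattern match), behaves like:

    # Succeed only if the wrapped command fails.
    NOT() {
        local es=0
        "$@" || es=$?      # run the command, capture its exit status
        (( es != 0 ))      # invert it: a non-zero status means NOT returns success
    }

    NOT waitforlisten 71057 /var/tmp/spdk2.sock   # usage in this test: the target must never start listening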
00:07:27.568 [2024-10-01 05:59:53.175412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71057 ] 00:07:27.828 [2024-10-01 05:59:53.313386] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 71054 has claimed it. 00:07:27.828 [2024-10-01 05:59:53.313455] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:28.396 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71057) - No such process 00:07:28.396 ERROR: process (pid: 71057) is no longer running 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 71054 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:28.396 05:59:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 71054 ']' 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:28.965 killing process with pid 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71054' 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 71054 00:07:28.965 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 71054 00:07:29.225 00:07:29.225 real 0m1.960s 00:07:29.225 user 0m2.310s 00:07:29.225 sys 0m0.541s 00:07:29.225 05:59:54 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:29.225 05:59:54 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:07:29.225 ************************************ 00:07:29.225 END TEST locking_app_on_locked_coremask 00:07:29.225 ************************************ 00:07:29.225 05:59:54 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:29.225 05:59:54 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:29.225 05:59:54 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:29.225 05:59:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.225 ************************************ 00:07:29.225 START TEST locking_overlapped_coremask 00:07:29.225 ************************************ 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=71108 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 71108 /var/tmp/spdk.sock 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71108 ']' 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.225 05:59:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.225 [2024-10-01 05:59:54.792158] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:29.225 [2024-10-01 05:59:54.792262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71108 ] 00:07:29.484 [2024-10-01 05:59:54.929989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:29.484 [2024-10-01 05:59:54.965095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.484 [2024-10-01 05:59:54.965224] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.484 [2024-10-01 05:59:54.965228] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.484 [2024-10-01 05:59:55.001709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=71113 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 71113 /var/tmp/spdk2.sock 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 71113 /var/tmp/spdk2.sock 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:29.743 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 71113 /var/tmp/spdk2.sock 00:07:29.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 71113 ']' 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:29.744 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.744 [2024-10-01 05:59:55.181496] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
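The failure shown in the entries that follow is just the intersection of the two core masks: pid 71108 was started with -m 0x7 and the second instance asks for -m 0x1c, and the single overlapping bit is core 2, matching the claim_cpu_cores error below. The arithmetic:

    # 0x7  = 0b00111 -> cores 0,1,2  (held by pid 71108)
    # 0x1c = 0b11100 -> cores 2,3,4  (requested by the second instance)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2 is contested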
00:07:29.744 [2024-10-01 05:59:55.181592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71113 ] 00:07:29.744 [2024-10-01 05:59:55.331874] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71108 has claimed it. 00:07:29.744 [2024-10-01 05:59:55.332013] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:30.312 ERROR: process (pid: 71113) is no longer running 00:07:30.312 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (71113) - No such process 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 71108 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 71108 ']' 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 71108 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.312 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71108 00:07:30.572 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.572 killing process with pid 71108 00:07:30.572 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.572 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71108' 00:07:30.572 05:59:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 71108 00:07:30.572 05:59:55 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 71108 00:07:30.572 00:07:30.572 real 0m1.455s 00:07:30.572 user 0m4.013s 00:07:30.572 sys 0m0.309s 00:07:30.572 05:59:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:30.572 05:59:56 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:30.572 ************************************ 00:07:30.572 END TEST locking_overlapped_coremask 00:07:30.572 ************************************ 00:07:30.832 05:59:56 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:30.832 05:59:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:30.832 05:59:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:30.832 05:59:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.832 ************************************ 00:07:30.832 START TEST locking_overlapped_coremask_via_rpc 00:07:30.832 ************************************ 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=71156 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 71156 /var/tmp/spdk.sock 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71156 ']' 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.832 05:59:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:30.832 [2024-10-01 05:59:56.300212] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:30.832 [2024-10-01 05:59:56.300327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71156 ] 00:07:30.832 [2024-10-01 05:59:56.434823] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:30.832 [2024-10-01 05:59:56.434872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:31.092 [2024-10-01 05:59:56.473921] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.092 [2024-10-01 05:59:56.474058] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.092 [2024-10-01 05:59:56.474064] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.092 [2024-10-01 05:59:56.514568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=71179 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 71179 /var/tmp/spdk2.sock 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71179 ']' 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.028 05:59:57 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.028 [2024-10-01 05:59:57.326420] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:32.028 [2024-10-01 05:59:57.326681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71179 ] 00:07:32.028 [2024-10-01 05:59:57.467010] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
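For orientation, the sequence this via-RPC variant drives is: both targets boot with --disable-cpumask-locks (hence the "CPU core locks deactivated" notices above), locking is then switched on over JSON-RPC, and the second enable is expected to fail on the shared core. A rough sketch of that sequence, using the same binaries, flags, and sockets the script prints (paths shortened, not a verbatim transcript):

    spdk_tgt -m 0x7  --disable-cpumask-locks &                            # pid 71156, no lock files yet
    spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &     # pid 71179, no lock files yet
    rpc.py framework_enable_cpumask_locks                                 # first target claims cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # should fail: core 2 already held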
00:07:32.028 [2024-10-01 05:59:57.467226] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:32.028 [2024-10-01 05:59:57.533680] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:07:32.028 [2024-10-01 05:59:57.537060] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:07:32.028 [2024-10-01 05:59:57.537062] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:32.028 [2024-10-01 05:59:57.608618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.962 [2024-10-01 05:59:58.363190] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 71156 has claimed it. 
00:07:32.962 request: 00:07:32.962 { 00:07:32.962 "method": "framework_enable_cpumask_locks", 00:07:32.962 "req_id": 1 00:07:32.962 } 00:07:32.962 Got JSON-RPC error response 00:07:32.962 response: 00:07:32.962 { 00:07:32.962 "code": -32603, 00:07:32.962 "message": "Failed to claim CPU core: 2" 00:07:32.962 } 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 71156 /var/tmp/spdk.sock 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71156 ']' 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:32.962 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 71179 /var/tmp/spdk2.sock 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 71179 ']' 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
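The -32603 "Failed to claim CPU core: 2" response above is the pass condition here; what the test checks next is that only the first target's lock files remain. A standalone version of that check, mirroring the check_remaining_locks lines echoed by cpu_locks.sh:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only cores 0-2 hold lock files'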
00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:33.221 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.479 ************************************ 00:07:33.479 END TEST locking_overlapped_coremask_via_rpc 00:07:33.479 ************************************ 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:33.480 00:07:33.480 real 0m2.723s 00:07:33.480 user 0m1.458s 00:07:33.480 sys 0m0.188s 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:33.480 05:59:58 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:33.480 05:59:58 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:33.480 05:59:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71156 ]] 00:07:33.480 05:59:58 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71156 00:07:33.480 05:59:58 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71156 ']' 00:07:33.480 05:59:58 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71156 00:07:33.480 05:59:58 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71156 00:07:33.480 killing process with pid 71156 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71156' 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71156 00:07:33.480 05:59:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71156 00:07:33.738 05:59:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71179 ]] 00:07:33.738 05:59:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71179 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71179 ']' 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71179 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.738 
05:59:59 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71179 00:07:33.738 killing process with pid 71179 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71179' 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 71179 00:07:33.738 05:59:59 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 71179 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 71156 ]] 00:07:34.305 Process with pid 71156 is not found 00:07:34.305 Process with pid 71179 is not found 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 71156 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71156 ']' 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71156 00:07:34.305 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71156) - No such process 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71156 is not found' 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 71179 ]] 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 71179 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 71179 ']' 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 71179 00:07:34.305 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (71179) - No such process 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 71179 is not found' 00:07:34.305 05:59:59 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:34.305 00:07:34.305 real 0m15.217s 00:07:34.305 user 0m29.366s 00:07:34.305 sys 0m4.399s 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.305 05:59:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.305 ************************************ 00:07:34.305 END TEST cpu_locks 00:07:34.305 ************************************ 00:07:34.305 ************************************ 00:07:34.305 END TEST event 00:07:34.305 ************************************ 00:07:34.305 00:07:34.305 real 0m42.495s 00:07:34.305 user 1m26.260s 00:07:34.305 sys 0m7.579s 00:07:34.305 05:59:59 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:34.305 05:59:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:34.305 05:59:59 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:34.305 05:59:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:34.305 05:59:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.305 05:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:34.305 ************************************ 00:07:34.305 START TEST thread 00:07:34.305 ************************************ 00:07:34.305 05:59:59 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:34.305 * Looking for test storage... 
00:07:34.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:34.305 05:59:59 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:34.305 05:59:59 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:34.305 05:59:59 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:34.305 05:59:59 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:34.305 05:59:59 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:34.305 05:59:59 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:34.305 05:59:59 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:34.305 05:59:59 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:34.305 05:59:59 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:34.305 05:59:59 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:34.305 05:59:59 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:34.305 05:59:59 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:34.305 05:59:59 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:34.305 05:59:59 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:34.305 05:59:59 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:34.305 05:59:59 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:34.305 05:59:59 thread -- scripts/common.sh@345 -- # : 1 00:07:34.564 05:59:59 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:34.564 05:59:59 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:34.564 05:59:59 thread -- scripts/common.sh@365 -- # decimal 1 00:07:34.564 05:59:59 thread -- scripts/common.sh@353 -- # local d=1 00:07:34.564 05:59:59 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:34.564 05:59:59 thread -- scripts/common.sh@355 -- # echo 1 00:07:34.564 05:59:59 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:34.564 05:59:59 thread -- scripts/common.sh@366 -- # decimal 2 00:07:34.564 05:59:59 thread -- scripts/common.sh@353 -- # local d=2 00:07:34.564 05:59:59 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:34.564 05:59:59 thread -- scripts/common.sh@355 -- # echo 2 00:07:34.564 05:59:59 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:34.564 05:59:59 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:34.564 05:59:59 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:34.564 05:59:59 thread -- scripts/common.sh@368 -- # return 0 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.564 --rc genhtml_branch_coverage=1 00:07:34.564 --rc genhtml_function_coverage=1 00:07:34.564 --rc genhtml_legend=1 00:07:34.564 --rc geninfo_all_blocks=1 00:07:34.564 --rc geninfo_unexecuted_blocks=1 00:07:34.564 00:07:34.564 ' 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.564 --rc genhtml_branch_coverage=1 00:07:34.564 --rc genhtml_function_coverage=1 00:07:34.564 --rc genhtml_legend=1 00:07:34.564 --rc geninfo_all_blocks=1 00:07:34.564 --rc geninfo_unexecuted_blocks=1 00:07:34.564 00:07:34.564 ' 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:34.564 --rc genhtml_branch_coverage=1 00:07:34.564 --rc genhtml_function_coverage=1 00:07:34.564 --rc genhtml_legend=1 00:07:34.564 --rc geninfo_all_blocks=1 00:07:34.564 --rc geninfo_unexecuted_blocks=1 00:07:34.564 00:07:34.564 ' 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:34.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:34.564 --rc genhtml_branch_coverage=1 00:07:34.564 --rc genhtml_function_coverage=1 00:07:34.564 --rc genhtml_legend=1 00:07:34.564 --rc geninfo_all_blocks=1 00:07:34.564 --rc geninfo_unexecuted_blocks=1 00:07:34.564 00:07:34.564 ' 00:07:34.564 05:59:59 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:34.564 05:59:59 thread -- common/autotest_common.sh@10 -- # set +x 00:07:34.564 ************************************ 00:07:34.564 START TEST thread_poller_perf 00:07:34.564 ************************************ 00:07:34.564 05:59:59 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:34.564 [2024-10-01 05:59:59.965129] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:34.564 [2024-10-01 05:59:59.965549] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71309 ] 00:07:34.564 [2024-10-01 06:00:00.100391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.564 [2024-10-01 06:00:00.142660] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.564 Running 1000 pollers for 1 seconds with 1 microseconds period. 
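The banner above maps onto the flags passed to poller_perf; the readings below are inferred from the banner text and from the second run (which passes -l 0 and reports a 0 microseconds period), so treat them as an informal gloss rather than documented option semantics:

    #  -b  number of pollers to register          (1000)
    #  -l  poller period in microseconds          (1 here, 0 in the next run)
    #  -t  test duration in seconds               (1)
    /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1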
00:07:35.940 ====================================== 00:07:35.940 busy:2209319420 (cyc) 00:07:35.940 total_run_count: 327000 00:07:35.940 tsc_hz: 2200000000 (cyc) 00:07:35.940 ====================================== 00:07:35.940 poller_cost: 6756 (cyc), 3070 (nsec) 00:07:35.940 00:07:35.940 real 0m1.261s 00:07:35.940 user 0m1.108s 00:07:35.940 sys 0m0.043s 00:07:35.940 ************************************ 00:07:35.940 END TEST thread_poller_perf 00:07:35.940 ************************************ 00:07:35.940 06:00:01 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:35.940 06:00:01 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:35.940 06:00:01 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.940 06:00:01 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:35.940 06:00:01 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:35.940 06:00:01 thread -- common/autotest_common.sh@10 -- # set +x 00:07:35.940 ************************************ 00:07:35.940 START TEST thread_poller_perf 00:07:35.940 ************************************ 00:07:35.940 06:00:01 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:35.940 [2024-10-01 06:00:01.284103] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:35.940 [2024-10-01 06:00:01.284551] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71339 ] 00:07:35.940 [2024-10-01 06:00:01.417041] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.940 Running 1000 pollers for 1 seconds with 0 microseconds period. 
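The summary block a few lines up ties together as follows; this is a back-of-the-envelope recomputation from the printed figures, and poller_perf's own rounding may differ slightly:

    busy=2209319420; runs=327000; tsc_hz=2200000000          # values printed by the first run
    cyc=$(( busy / runs ))                                    # 6756 cycles per poller invocation
    echo "$cyc cyc, $(( cyc * 1000000000 / tsc_hz )) nsec"    # -> 6756 cyc, 3070 nsec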
00:07:35.940 [2024-10-01 06:00:01.460839] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.318 ====================================== 00:07:37.318 busy:2202202068 (cyc) 00:07:37.318 total_run_count: 4202000 00:07:37.318 tsc_hz: 2200000000 (cyc) 00:07:37.318 ====================================== 00:07:37.318 poller_cost: 524 (cyc), 238 (nsec) 00:07:37.318 00:07:37.318 real 0m1.246s 00:07:37.318 user 0m1.090s 00:07:37.318 sys 0m0.047s 00:07:37.318 06:00:02 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.318 ************************************ 00:07:37.318 END TEST thread_poller_perf 00:07:37.318 ************************************ 00:07:37.318 06:00:02 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:37.318 06:00:02 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:37.318 ************************************ 00:07:37.318 END TEST thread 00:07:37.318 ************************************ 00:07:37.318 00:07:37.318 real 0m2.819s 00:07:37.318 user 0m2.342s 00:07:37.318 sys 0m0.250s 00:07:37.318 06:00:02 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:37.318 06:00:02 thread -- common/autotest_common.sh@10 -- # set +x 00:07:37.318 06:00:02 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:37.318 06:00:02 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:37.318 06:00:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:37.318 06:00:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:37.318 06:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:37.318 ************************************ 00:07:37.318 START TEST app_cmdline 00:07:37.318 ************************************ 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:37.318 * Looking for test storage... 00:07:37.318 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:37.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:37.318 06:00:02 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:37.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.318 --rc genhtml_branch_coverage=1 00:07:37.318 --rc genhtml_function_coverage=1 00:07:37.318 --rc genhtml_legend=1 00:07:37.318 --rc geninfo_all_blocks=1 00:07:37.318 --rc geninfo_unexecuted_blocks=1 00:07:37.318 00:07:37.318 ' 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:37.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.318 --rc genhtml_branch_coverage=1 00:07:37.318 --rc genhtml_function_coverage=1 00:07:37.318 --rc genhtml_legend=1 00:07:37.318 --rc geninfo_all_blocks=1 00:07:37.318 --rc geninfo_unexecuted_blocks=1 00:07:37.318 00:07:37.318 ' 00:07:37.318 06:00:02 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:37.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.318 --rc genhtml_branch_coverage=1 00:07:37.319 --rc genhtml_function_coverage=1 00:07:37.319 --rc genhtml_legend=1 00:07:37.319 --rc geninfo_all_blocks=1 00:07:37.319 --rc geninfo_unexecuted_blocks=1 00:07:37.319 00:07:37.319 ' 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:37.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:37.319 --rc genhtml_branch_coverage=1 00:07:37.319 --rc genhtml_function_coverage=1 00:07:37.319 --rc genhtml_legend=1 00:07:37.319 --rc geninfo_all_blocks=1 00:07:37.319 --rc geninfo_unexecuted_blocks=1 00:07:37.319 00:07:37.319 ' 00:07:37.319 06:00:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:37.319 06:00:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=71427 00:07:37.319 06:00:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 71427 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 71427 ']' 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.319 06:00:02 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:37.319 06:00:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:37.319 [2024-10-01 06:00:02.882787] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:37.319 [2024-10-01 06:00:02.883188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71427 ] 00:07:37.578 [2024-10-01 06:00:03.024252] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.578 [2024-10-01 06:00:03.069352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.578 [2024-10-01 06:00:03.112336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.836 06:00:03 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.836 06:00:03 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:37.836 06:00:03 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:38.096 { 00:07:38.096 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:07:38.096 "fields": { 00:07:38.096 "major": 25, 00:07:38.096 "minor": 1, 00:07:38.096 "patch": 0, 00:07:38.096 "suffix": "-pre", 00:07:38.096 "commit": "09cc66129" 00:07:38.096 } 00:07:38.096 } 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:38.096 06:00:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.096 06:00:03 app_cmdline -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:38.096 06:00:03 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.355 request: 00:07:38.355 { 00:07:38.355 "method": "env_dpdk_get_mem_stats", 00:07:38.355 "req_id": 1 00:07:38.355 } 00:07:38.355 Got JSON-RPC error response 00:07:38.355 response: 00:07:38.355 { 00:07:38.355 "code": -32601, 00:07:38.355 "message": "Method not found" 00:07:38.355 } 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:38.355 06:00:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 71427 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 71427 ']' 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 71427 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71427 00:07:38.355 killing process with pid 71427 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71427' 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@969 -- # kill 71427 00:07:38.355 06:00:03 app_cmdline -- common/autotest_common.sh@974 -- # wait 71427 00:07:38.617 00:07:38.617 real 0m1.608s 00:07:38.617 user 0m2.128s 00:07:38.617 sys 0m0.397s 00:07:38.617 ************************************ 00:07:38.617 END TEST app_cmdline 00:07:38.617 ************************************ 00:07:38.617 06:00:04 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:38.617 06:00:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:38.876 06:00:04 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.876 06:00:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:38.876 06:00:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:38.876 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:07:38.876 ************************************ 00:07:38.876 START TEST version 00:07:38.876 ************************************ 00:07:38.876 06:00:04 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:38.876 * Looking for test storage... 
00:07:38.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:38.876 06:00:04 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:38.876 06:00:04 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:38.876 06:00:04 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:38.876 06:00:04 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.137 06:00:04 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.137 06:00:04 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.137 06:00:04 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.137 06:00:04 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.137 06:00:04 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.137 06:00:04 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.137 06:00:04 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.137 06:00:04 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.137 06:00:04 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.137 06:00:04 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.137 06:00:04 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.137 06:00:04 version -- scripts/common.sh@344 -- # case "$op" in 00:07:39.137 06:00:04 version -- scripts/common.sh@345 -- # : 1 00:07:39.137 06:00:04 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.137 06:00:04 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:39.137 06:00:04 version -- scripts/common.sh@365 -- # decimal 1 00:07:39.137 06:00:04 version -- scripts/common.sh@353 -- # local d=1 00:07:39.137 06:00:04 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.137 06:00:04 version -- scripts/common.sh@355 -- # echo 1 00:07:39.137 06:00:04 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.137 06:00:04 version -- scripts/common.sh@366 -- # decimal 2 00:07:39.137 06:00:04 version -- scripts/common.sh@353 -- # local d=2 00:07:39.137 06:00:04 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.137 06:00:04 version -- scripts/common.sh@355 -- # echo 2 00:07:39.137 06:00:04 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.137 06:00:04 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.137 06:00:04 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.137 06:00:04 version -- scripts/common.sh@368 -- # return 0 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.137 --rc genhtml_branch_coverage=1 00:07:39.137 --rc genhtml_function_coverage=1 00:07:39.137 --rc genhtml_legend=1 00:07:39.137 --rc geninfo_all_blocks=1 00:07:39.137 --rc geninfo_unexecuted_blocks=1 00:07:39.137 00:07:39.137 ' 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.137 --rc genhtml_branch_coverage=1 00:07:39.137 --rc genhtml_function_coverage=1 00:07:39.137 --rc genhtml_legend=1 00:07:39.137 --rc geninfo_all_blocks=1 00:07:39.137 --rc geninfo_unexecuted_blocks=1 00:07:39.137 00:07:39.137 ' 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.137 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:39.137 --rc genhtml_branch_coverage=1 00:07:39.137 --rc genhtml_function_coverage=1 00:07:39.137 --rc genhtml_legend=1 00:07:39.137 --rc geninfo_all_blocks=1 00:07:39.137 --rc geninfo_unexecuted_blocks=1 00:07:39.137 00:07:39.137 ' 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.137 --rc genhtml_branch_coverage=1 00:07:39.137 --rc genhtml_function_coverage=1 00:07:39.137 --rc genhtml_legend=1 00:07:39.137 --rc geninfo_all_blocks=1 00:07:39.137 --rc geninfo_unexecuted_blocks=1 00:07:39.137 00:07:39.137 ' 00:07:39.137 06:00:04 version -- app/version.sh@17 -- # get_header_version major 00:07:39.137 06:00:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # cut -f2 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.137 06:00:04 version -- app/version.sh@17 -- # major=25 00:07:39.137 06:00:04 version -- app/version.sh@18 -- # get_header_version minor 00:07:39.137 06:00:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # cut -f2 00:07:39.137 06:00:04 version -- app/version.sh@18 -- # minor=1 00:07:39.137 06:00:04 version -- app/version.sh@19 -- # get_header_version patch 00:07:39.137 06:00:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # cut -f2 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.137 06:00:04 version -- app/version.sh@19 -- # patch=0 00:07:39.137 06:00:04 version -- app/version.sh@20 -- # get_header_version suffix 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # cut -f2 00:07:39.137 06:00:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:39.137 06:00:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:39.137 06:00:04 version -- app/version.sh@20 -- # suffix=-pre 00:07:39.137 06:00:04 version -- app/version.sh@22 -- # version=25.1 00:07:39.137 06:00:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:39.137 06:00:04 version -- app/version.sh@28 -- # version=25.1rc0 00:07:39.137 06:00:04 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:39.137 06:00:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:39.137 06:00:04 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:39.137 06:00:04 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:39.137 00:07:39.137 real 0m0.271s 00:07:39.137 user 0m0.165s 00:07:39.137 sys 0m0.143s 00:07:39.137 ************************************ 00:07:39.137 END TEST version 00:07:39.137 ************************************ 00:07:39.137 06:00:04 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:39.137 06:00:04 version -- common/autotest_common.sh@10 -- # set +x 00:07:39.137 06:00:04 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:39.137 06:00:04 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:39.137 06:00:04 -- spdk/autotest.sh@194 -- # uname -s 00:07:39.137 06:00:04 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:39.137 06:00:04 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:39.137 06:00:04 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:39.137 06:00:04 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:39.137 06:00:04 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:39.137 06:00:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:39.137 06:00:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.137 06:00:04 -- common/autotest_common.sh@10 -- # set +x 00:07:39.137 ************************************ 00:07:39.137 START TEST spdk_dd 00:07:39.137 ************************************ 00:07:39.137 06:00:04 spdk_dd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:39.137 * Looking for test storage... 00:07:39.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.137 06:00:04 spdk_dd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.137 06:00:04 spdk_dd -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.137 06:00:04 spdk_dd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:39.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.397 --rc genhtml_branch_coverage=1 00:07:39.397 --rc genhtml_function_coverage=1 00:07:39.397 --rc genhtml_legend=1 00:07:39.397 --rc geninfo_all_blocks=1 00:07:39.397 --rc geninfo_unexecuted_blocks=1 00:07:39.397 00:07:39.397 ' 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:39.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.397 --rc genhtml_branch_coverage=1 00:07:39.397 --rc genhtml_function_coverage=1 00:07:39.397 --rc genhtml_legend=1 00:07:39.397 --rc geninfo_all_blocks=1 00:07:39.397 --rc geninfo_unexecuted_blocks=1 00:07:39.397 00:07:39.397 ' 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:39.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.397 --rc genhtml_branch_coverage=1 00:07:39.397 --rc genhtml_function_coverage=1 00:07:39.397 --rc genhtml_legend=1 00:07:39.397 --rc geninfo_all_blocks=1 00:07:39.397 --rc geninfo_unexecuted_blocks=1 00:07:39.397 00:07:39.397 ' 00:07:39.397 06:00:04 spdk_dd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:39.397 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:39.397 --rc genhtml_branch_coverage=1 00:07:39.397 --rc genhtml_function_coverage=1 00:07:39.397 --rc genhtml_legend=1 00:07:39.397 --rc geninfo_all_blocks=1 00:07:39.397 --rc geninfo_unexecuted_blocks=1 00:07:39.397 00:07:39.397 ' 00:07:39.397 06:00:04 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:39.397 06:00:04 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:39.397 06:00:04 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.397 06:00:04 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.397 06:00:04 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.397 06:00:04 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:39.397 06:00:04 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:39.397 06:00:04 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:39.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:39.657 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.657 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:39.657 06:00:05 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:39.657 06:00:05 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:39.657 06:00:05 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:39.657 06:00:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:39.917 06:00:05 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:39.917 06:00:05 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.0 == liburing.so.* ]] 
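For reference, the controller enumeration traced just above (scripts/common.sh iter_pci_class_code 01 08 02) boils down to filtering lspci output by the NVMe class code. A minimal standalone sketch of that same filter, recomposed from the traced commands and assuming pciutils' lspci is available outside the test harness:

    # List NVMe controllers (class 01, subclass 08, prog-if 02) by PCI address.
    # Recomposed from the pipeline traced above; assumes pciutils' lspci.
    lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

In this run the filter reports 0000:00:10.0 and 0000:00:11.0, the two addresses that basic_rw.sh later receives as its test controllers.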
00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.14.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 
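The check_liburing loop being traced here walks the NEEDED entries of the spdk_dd binary one library at a time. Recomposed as a standalone sketch (assuming binutils' objdump and the spdk_dd binary path used in this run):

    # Decide whether spdk_dd was linked against liburing, as dd/common.sh does above.
    # Assumes binutils' objdump and the spdk_dd binary built for this run.
    liburing_in_use=0
    while read -r _ lib _; do
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    echo "liburing_in_use=$liburing_in_use"

Further down in this trace the loop reaches liburing.so.2, prints "* spdk_dd linked to liburing", and the liburing_in_use guard at dd/dd.sh@15 therefore does not trip.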
00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.917 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.1.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- 
dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.16.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.10.1 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- 
# read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.0 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:39.918 * spdk_dd linked to liburing 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:39.918 06:00:05 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:39.918 06:00:05 spdk_dd -- 
common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@26 -- # CONFIG_AIO_FSDEV=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@27 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@29 -- # CONFIG_UBLK=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@30 -- # CONFIG_ISAL_CRYPTO=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@31 -- # CONFIG_OPENSSL_PATH= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OCF=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@33 -- # CONFIG_FUSE=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@34 -- # CONFIG_VTUNE_DIR= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@35 -- # CONFIG_FUZZER_LIB= 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FSDEV=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@38 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@39 -- # CONFIG_CRYPTO=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@40 -- # CONFIG_PGO_USE=n 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@41 -- # CONFIG_VHOST=y 00:07:39.918 06:00:05 spdk_dd -- common/build_config.sh@42 -- # CONFIG_DAOS=n 00:07:39.919 06:00:05 spdk_dd -- 
common/build_config.sh@43 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DAOS_DIR= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@45 -- # CONFIG_UNIT_TESTS=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@46 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@47 -- # CONFIG_VIRTIO=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@48 -- # CONFIG_DPDK_UADK=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@49 -- # CONFIG_COVERAGE=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@50 -- # CONFIG_RDMA=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@51 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_LZ4=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@53 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@54 -- # CONFIG_URING_PATH= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@55 -- # CONFIG_XNVME=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@56 -- # CONFIG_VFIO_USER=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@57 -- # CONFIG_ARCH=native 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@58 -- # CONFIG_HAVE_EVP_MAC=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@59 -- # CONFIG_URING_ZNS=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@60 -- # CONFIG_WERROR=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@61 -- # CONFIG_HAVE_LIBBSD=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@62 -- # CONFIG_UBSAN=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@63 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@64 -- # CONFIG_IPSEC_MB_DIR= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@65 -- # CONFIG_GOLANG=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@66 -- # CONFIG_ISAL=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@67 -- # CONFIG_IDXD_KERNEL=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@68 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@69 -- # CONFIG_RDMA_PROV=verbs 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@70 -- # CONFIG_APPS=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@71 -- # CONFIG_SHARED=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@72 -- # CONFIG_HAVE_KEYUTILS=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@73 -- # CONFIG_FC_PATH= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@74 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@75 -- # CONFIG_FC=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@76 -- # CONFIG_AVAHI=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@77 -- # CONFIG_FIO_PLUGIN=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@78 -- # CONFIG_RAID5F=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@79 -- # CONFIG_EXAMPLES=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@80 -- # CONFIG_TESTS=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@81 -- # CONFIG_CRYPTO_MLX5=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@82 -- # CONFIG_MAX_LCORES=128 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@83 -- # CONFIG_IPSEC_MB=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@84 -- # 
CONFIG_PGO_DIR= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@85 -- # CONFIG_DEBUG=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@87 -- # CONFIG_CROSS_PREFIX= 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@88 -- # CONFIG_COPY_FILE_RANGE=y 00:07:39.919 06:00:05 spdk_dd -- common/build_config.sh@89 -- # CONFIG_URING=y 00:07:39.919 06:00:05 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:39.919 06:00:05 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:39.919 06:00:05 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:39.919 06:00:05 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:39.919 06:00:05 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:39.919 06:00:05 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:39.919 06:00:05 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:07:39.919 06:00:05 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:39.919 06:00:05 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:39.919 ************************************ 00:07:39.919 START TEST spdk_dd_basic_rw 00:07:39.919 ************************************ 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:39.919 * Looking for test storage... 00:07:39.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lcov --version 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:39.919 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.180 --rc genhtml_branch_coverage=1 00:07:40.180 --rc genhtml_function_coverage=1 00:07:40.180 --rc genhtml_legend=1 00:07:40.180 --rc geninfo_all_blocks=1 00:07:40.180 --rc geninfo_unexecuted_blocks=1 00:07:40.180 00:07:40.180 ' 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.180 --rc genhtml_branch_coverage=1 00:07:40.180 --rc genhtml_function_coverage=1 00:07:40.180 --rc genhtml_legend=1 00:07:40.180 --rc geninfo_all_blocks=1 00:07:40.180 --rc geninfo_unexecuted_blocks=1 00:07:40.180 00:07:40.180 ' 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.180 --rc genhtml_branch_coverage=1 00:07:40.180 --rc genhtml_function_coverage=1 00:07:40.180 --rc genhtml_legend=1 00:07:40.180 --rc geninfo_all_blocks=1 00:07:40.180 --rc geninfo_unexecuted_blocks=1 00:07:40.180 00:07:40.180 ' 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:40.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:40.180 --rc genhtml_branch_coverage=1 00:07:40.180 --rc genhtml_function_coverage=1 00:07:40.180 --rc genhtml_legend=1 00:07:40.180 --rc geninfo_all_blocks=1 00:07:40.180 --rc geninfo_unexecuted_blocks=1 00:07:40.180 00:07:40.180 ' 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:40.180 06:00:05 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:40.180 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:40.181 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:40.181 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.182 ************************************ 00:07:40.182 START TEST dd_bs_lt_native_bs 00:07:40.182 ************************************ 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1125 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@650 -- # local es=0 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:40.182 06:00:05 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:40.442 { 00:07:40.442 "subsystems": [ 00:07:40.442 { 00:07:40.442 "subsystem": "bdev", 00:07:40.442 "config": [ 00:07:40.442 { 00:07:40.442 "params": { 00:07:40.442 "trtype": "pcie", 00:07:40.442 "traddr": "0000:00:10.0", 00:07:40.442 "name": "Nvme0" 00:07:40.442 }, 00:07:40.442 "method": "bdev_nvme_attach_controller" 00:07:40.442 }, 00:07:40.442 { 00:07:40.442 "method": "bdev_wait_for_examine" 00:07:40.442 } 00:07:40.442 ] 00:07:40.442 } 00:07:40.442 ] 00:07:40.442 } 00:07:40.442 [2024-10-01 06:00:05.813440] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:40.442 [2024-10-01 06:00:05.813806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71766 ] 00:07:40.442 [2024-10-01 06:00:05.954478] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.442 [2024-10-01 06:00:05.998414] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.442 [2024-10-01 06:00:06.033707] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.702 [2024-10-01 06:00:06.129011] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:40.702 [2024-10-01 06:00:06.129140] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:40.702 [2024-10-01 06:00:06.208016] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@653 -- # es=234 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@662 -- # es=106 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # case "$es" in 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@670 -- # es=1 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:40.702 00:07:40.702 real 0m0.533s 00:07:40.702 user 0m0.364s 00:07:40.702 sys 0m0.126s 00:07:40.702 ************************************ 00:07:40.702 END TEST dd_bs_lt_native_bs 00:07:40.702 ************************************ 00:07:40.702 
06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.702 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.967 ************************************ 00:07:40.967 START TEST dd_rw 00:07:40.967 ************************************ 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1125 -- # basic_rw 4096 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:40.967 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:40.968 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:40.968 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:40.968 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.536 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:41.536 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:41.536 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.536 06:00:06 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.536 { 00:07:41.536 "subsystems": [ 00:07:41.536 { 00:07:41.536 "subsystem": "bdev", 00:07:41.536 "config": [ 00:07:41.536 { 00:07:41.536 "params": { 00:07:41.536 "trtype": "pcie", 00:07:41.536 "traddr": "0000:00:10.0", 00:07:41.536 "name": "Nvme0" 00:07:41.536 }, 00:07:41.536 "method": "bdev_nvme_attach_controller" 00:07:41.536 }, 00:07:41.536 { 00:07:41.536 "method": "bdev_wait_for_examine" 00:07:41.536 } 00:07:41.536 ] 00:07:41.536 } 00:07:41.536 
] 00:07:41.536 } 00:07:41.536 [2024-10-01 06:00:07.006041] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:41.536 [2024-10-01 06:00:07.006178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71797 ] 00:07:41.536 [2024-10-01 06:00:07.144333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.796 [2024-10-01 06:00:07.189729] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.796 [2024-10-01 06:00:07.226445] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.056  Copying: 60/60 [kB] (average 19 MBps) 00:07:42.056 00:07:42.056 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:42.056 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:42.056 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.056 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.056 { 00:07:42.056 "subsystems": [ 00:07:42.056 { 00:07:42.056 "subsystem": "bdev", 00:07:42.056 "config": [ 00:07:42.056 { 00:07:42.056 "params": { 00:07:42.056 "trtype": "pcie", 00:07:42.056 "traddr": "0000:00:10.0", 00:07:42.056 "name": "Nvme0" 00:07:42.056 }, 00:07:42.056 "method": "bdev_nvme_attach_controller" 00:07:42.056 }, 00:07:42.056 { 00:07:42.056 "method": "bdev_wait_for_examine" 00:07:42.056 } 00:07:42.056 ] 00:07:42.056 } 00:07:42.056 ] 00:07:42.056 } 00:07:42.056 [2024-10-01 06:00:07.559680] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
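For reference, the 4096-byte native block size used throughout this dd_rw run was derived from the identify dump earlier in this output: dd/common.sh matches the current LBA format line ("LBA Format #04: Data Size: 4096"), and basic_rw.sh then runs dd_bs_lt_native_bs to prove that an undersized --bs is rejected (the es=234 -> 106 -> 1 sequence in the trace is the NOT wrapper normalizing the failing exit code). A minimal bash sketch of that logic, with helper plumbing and variable names illustrative rather than verbatim from the scripts:

    # $identify_dump holds the controller/namespace report printed above.
    pat='LBA Format #04: Data Size: *([0-9]+)'
    [[ $identify_dump =~ $pat ]] && native_bs=${BASH_REMATCH[1]}   # -> 4096 in this run

    # dd_bs_lt_native_bs: spdk_dd must refuse --bs=2048 (< native_bs) with
    # "--bs value cannot be less than input (1) neither output (4096) native block size".
    if "$SPDK_DD" --if="$input" --ob=Nvme0n1 --bs=2048 --json "$conf"; then
        echo "spdk_dd accepted bs < native_bs" >&2
        exit 1
    fi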
00:07:42.056 [2024-10-01 06:00:07.560429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71810 ] 00:07:42.315 [2024-10-01 06:00:07.703734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.315 [2024-10-01 06:00:07.737669] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.315 [2024-10-01 06:00:07.765819] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.575  Copying: 60/60 [kB] (average 19 MBps) 00:07:42.575 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:42.575 06:00:07 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:42.575 [2024-10-01 06:00:08.044231] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
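The { "subsystems": ... } block printed ahead of every spdk_dd run here is the bdev configuration that gen_conf emits and spdk_dd consumes through --json /dev/fd/62: it attaches the NVMe controller at PCIe address 0000:00:10.0 as "Nvme0" (exposing the Nvme0n1 bdev being exercised) and waits for bdev examine to finish before I/O starts. An approximate, self-contained equivalent of the write leg just traced, using process substitution in place of the script's file-descriptor plumbing:

    conf='{
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
              "method": "bdev_nvme_attach_controller" },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }'
    # Same flags as the bs=4096, qd=1 write above; paths are the repo's dd.dump files.
    "$SPDK_DD" --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
               --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")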
00:07:42.575 [2024-10-01 06:00:08.044341] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71830 ] 00:07:42.575 { 00:07:42.575 "subsystems": [ 00:07:42.575 { 00:07:42.575 "subsystem": "bdev", 00:07:42.575 "config": [ 00:07:42.575 { 00:07:42.575 "params": { 00:07:42.575 "trtype": "pcie", 00:07:42.575 "traddr": "0000:00:10.0", 00:07:42.575 "name": "Nvme0" 00:07:42.575 }, 00:07:42.575 "method": "bdev_nvme_attach_controller" 00:07:42.575 }, 00:07:42.575 { 00:07:42.575 "method": "bdev_wait_for_examine" 00:07:42.575 } 00:07:42.575 ] 00:07:42.575 } 00:07:42.575 ] 00:07:42.575 } 00:07:42.575 [2024-10-01 06:00:08.179629] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.835 [2024-10-01 06:00:08.216670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.835 [2024-10-01 06:00:08.244517] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.094  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.094 00:07:43.094 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:43.094 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:43.095 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:43.095 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:43.095 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:43.095 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:43.095 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.663 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:43.663 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:43.663 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.663 06:00:08 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.663 { 00:07:43.663 "subsystems": [ 00:07:43.663 { 00:07:43.663 "subsystem": "bdev", 00:07:43.663 "config": [ 00:07:43.663 { 00:07:43.663 "params": { 00:07:43.663 "trtype": "pcie", 00:07:43.663 "traddr": "0000:00:10.0", 00:07:43.663 "name": "Nvme0" 00:07:43.663 }, 00:07:43.663 "method": "bdev_nvme_attach_controller" 00:07:43.663 }, 00:07:43.663 { 00:07:43.663 "method": "bdev_wait_for_examine" 00:07:43.663 } 00:07:43.663 ] 00:07:43.663 } 00:07:43.663 ] 00:07:43.663 } 00:07:43.663 [2024-10-01 06:00:09.025931] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
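Every (bs, qd) combination in dd_rw repeats the four-step sequence just traced: generate a random payload, write it to the bdev, read it back, byte-compare, then zero the start of the namespace before the next pass. Condensed into a sketch (gen_bytes is the helper named in the trace; its output redirection here is assumed):

    bs=4096; qd=1; count=15
    size=$((bs * count))                                   # 61440 bytes for this pass
    gen_bytes "$size" > "$TEST_DIR/dd.dump0"               # random payload
    "$SPDK_DD" --if="$TEST_DIR/dd.dump0" --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json "$conf"
    "$SPDK_DD" --ib=Nvme0n1 --of="$TEST_DIR/dd.dump1" --bs="$bs" --qd="$qd" --count="$count" --json "$conf"
    diff -q "$TEST_DIR/dd.dump0" "$TEST_DIR/dd.dump1"      # round-trip must be bit-identical
    # clear_nvme: overwrite the first 1 MiB with zeroes before the next combination
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json "$conf"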
00:07:43.663 [2024-10-01 06:00:09.026027] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71845 ] 00:07:43.663 [2024-10-01 06:00:09.164882] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.663 [2024-10-01 06:00:09.202438] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.663 [2024-10-01 06:00:09.232665] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:43.921  Copying: 60/60 [kB] (average 58 MBps) 00:07:43.921 00:07:43.921 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:43.921 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:43.921 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.921 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.921 { 00:07:43.921 "subsystems": [ 00:07:43.921 { 00:07:43.921 "subsystem": "bdev", 00:07:43.921 "config": [ 00:07:43.921 { 00:07:43.921 "params": { 00:07:43.921 "trtype": "pcie", 00:07:43.921 "traddr": "0000:00:10.0", 00:07:43.921 "name": "Nvme0" 00:07:43.921 }, 00:07:43.922 "method": "bdev_nvme_attach_controller" 00:07:43.922 }, 00:07:43.922 { 00:07:43.922 "method": "bdev_wait_for_examine" 00:07:43.922 } 00:07:43.922 ] 00:07:43.922 } 00:07:43.922 ] 00:07:43.922 } 00:07:43.922 [2024-10-01 06:00:09.513362] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:43.922 [2024-10-01 06:00:09.513456] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71858 ] 00:07:44.181 [2024-10-01 06:00:09.653231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.181 [2024-10-01 06:00:09.690866] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.181 [2024-10-01 06:00:09.721100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.440  Copying: 60/60 [kB] (average 58 MBps) 00:07:44.440 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:44.440 06:00:09 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:44.440 [2024-10-01 06:00:09.998561] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:44.440 [2024-10-01 06:00:09.998819] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71874 ] 00:07:44.440 { 00:07:44.440 "subsystems": [ 00:07:44.440 { 00:07:44.440 "subsystem": "bdev", 00:07:44.440 "config": [ 00:07:44.440 { 00:07:44.440 "params": { 00:07:44.440 "trtype": "pcie", 00:07:44.440 "traddr": "0000:00:10.0", 00:07:44.440 "name": "Nvme0" 00:07:44.440 }, 00:07:44.440 "method": "bdev_nvme_attach_controller" 00:07:44.440 }, 00:07:44.440 { 00:07:44.440 "method": "bdev_wait_for_examine" 00:07:44.440 } 00:07:44.440 ] 00:07:44.440 } 00:07:44.440 ] 00:07:44.440 } 00:07:44.699 [2024-10-01 06:00:10.142797] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.699 [2024-10-01 06:00:10.181434] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.699 [2024-10-01 06:00:10.212300] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.958  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:44.958 00:07:44.958 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:44.959 06:00:10 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.527 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:45.527 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.527 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.527 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.527 [2024-10-01 06:00:11.115074] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:45.527 [2024-10-01 06:00:11.115166] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71893 ] 00:07:45.527 { 00:07:45.527 "subsystems": [ 00:07:45.527 { 00:07:45.527 "subsystem": "bdev", 00:07:45.527 "config": [ 00:07:45.527 { 00:07:45.527 "params": { 00:07:45.527 "trtype": "pcie", 00:07:45.527 "traddr": "0000:00:10.0", 00:07:45.527 "name": "Nvme0" 00:07:45.527 }, 00:07:45.527 "method": "bdev_nvme_attach_controller" 00:07:45.527 }, 00:07:45.527 { 00:07:45.527 "method": "bdev_wait_for_examine" 00:07:45.527 } 00:07:45.527 ] 00:07:45.527 } 00:07:45.527 ] 00:07:45.527 } 00:07:45.785 [2024-10-01 06:00:11.252473] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.785 [2024-10-01 06:00:11.288956] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.785 [2024-10-01 06:00:11.321562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.044  Copying: 56/56 [kB] (average 27 MBps) 00:07:46.044 00:07:46.044 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:46.044 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:46.044 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.044 06:00:11 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.044 [2024-10-01 06:00:11.597530] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:46.044 [2024-10-01 06:00:11.597806] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71906 ] 00:07:46.044 { 00:07:46.044 "subsystems": [ 00:07:46.044 { 00:07:46.044 "subsystem": "bdev", 00:07:46.044 "config": [ 00:07:46.044 { 00:07:46.044 "params": { 00:07:46.044 "trtype": "pcie", 00:07:46.044 "traddr": "0000:00:10.0", 00:07:46.044 "name": "Nvme0" 00:07:46.044 }, 00:07:46.044 "method": "bdev_nvme_attach_controller" 00:07:46.044 }, 00:07:46.044 { 00:07:46.044 "method": "bdev_wait_for_examine" 00:07:46.044 } 00:07:46.044 ] 00:07:46.044 } 00:07:46.044 ] 00:07:46.044 } 00:07:46.304 [2024-10-01 06:00:11.735902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.304 [2024-10-01 06:00:11.769332] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.304 [2024-10-01 06:00:11.796870] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:46.563  Copying: 56/56 [kB] (average 18 MBps) 00:07:46.563 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.563 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.563 [2024-10-01 06:00:12.074748] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:46.563 [2024-10-01 06:00:12.074842] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71922 ] 00:07:46.563 { 00:07:46.563 "subsystems": [ 00:07:46.563 { 00:07:46.563 "subsystem": "bdev", 00:07:46.563 "config": [ 00:07:46.563 { 00:07:46.563 "params": { 00:07:46.563 "trtype": "pcie", 00:07:46.563 "traddr": "0000:00:10.0", 00:07:46.563 "name": "Nvme0" 00:07:46.563 }, 00:07:46.563 "method": "bdev_nvme_attach_controller" 00:07:46.563 }, 00:07:46.563 { 00:07:46.563 "method": "bdev_wait_for_examine" 00:07:46.563 } 00:07:46.563 ] 00:07:46.563 } 00:07:46.563 ] 00:07:46.563 } 00:07:46.822 [2024-10-01 06:00:12.212879] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.822 [2024-10-01 06:00:12.254176] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.822 [2024-10-01 06:00:12.288742] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.080  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:47.080 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:47.080 06:00:12 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.647 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:47.647 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.647 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:47.647 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:47.647 [2024-10-01 06:00:13.077856] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:47.647 [2024-10-01 06:00:13.078122] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71941 ] 00:07:47.647 { 00:07:47.647 "subsystems": [ 00:07:47.647 { 00:07:47.647 "subsystem": "bdev", 00:07:47.647 "config": [ 00:07:47.647 { 00:07:47.647 "params": { 00:07:47.647 "trtype": "pcie", 00:07:47.647 "traddr": "0000:00:10.0", 00:07:47.647 "name": "Nvme0" 00:07:47.647 }, 00:07:47.647 "method": "bdev_nvme_attach_controller" 00:07:47.647 }, 00:07:47.647 { 00:07:47.647 "method": "bdev_wait_for_examine" 00:07:47.647 } 00:07:47.647 ] 00:07:47.647 } 00:07:47.647 ] 00:07:47.647 } 00:07:47.647 [2024-10-01 06:00:13.217185] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.647 [2024-10-01 06:00:13.262292] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.907 [2024-10-01 06:00:13.298148] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.907  Copying: 56/56 [kB] (average 54 MBps) 00:07:47.907 00:07:48.166 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:48.166 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:48.166 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.166 06:00:13 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.166 [2024-10-01 06:00:13.576364] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:48.166 [2024-10-01 06:00:13.576600] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71954 ] 00:07:48.166 { 00:07:48.166 "subsystems": [ 00:07:48.166 { 00:07:48.166 "subsystem": "bdev", 00:07:48.166 "config": [ 00:07:48.166 { 00:07:48.166 "params": { 00:07:48.166 "trtype": "pcie", 00:07:48.166 "traddr": "0000:00:10.0", 00:07:48.166 "name": "Nvme0" 00:07:48.166 }, 00:07:48.166 "method": "bdev_nvme_attach_controller" 00:07:48.166 }, 00:07:48.166 { 00:07:48.166 "method": "bdev_wait_for_examine" 00:07:48.166 } 00:07:48.166 ] 00:07:48.166 } 00:07:48.166 ] 00:07:48.166 } 00:07:48.166 [2024-10-01 06:00:13.714689] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.166 [2024-10-01 06:00:13.760087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.425 [2024-10-01 06:00:13.793695] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.425  Copying: 56/56 [kB] (average 54 MBps) 00:07:48.425 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.425 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.685 [2024-10-01 06:00:14.082406] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:48.685 [2024-10-01 06:00:14.082510] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71970 ] 00:07:48.685 { 00:07:48.685 "subsystems": [ 00:07:48.685 { 00:07:48.685 "subsystem": "bdev", 00:07:48.685 "config": [ 00:07:48.685 { 00:07:48.685 "params": { 00:07:48.685 "trtype": "pcie", 00:07:48.685 "traddr": "0000:00:10.0", 00:07:48.685 "name": "Nvme0" 00:07:48.685 }, 00:07:48.685 "method": "bdev_nvme_attach_controller" 00:07:48.685 }, 00:07:48.685 { 00:07:48.685 "method": "bdev_wait_for_examine" 00:07:48.685 } 00:07:48.685 ] 00:07:48.685 } 00:07:48.685 ] 00:07:48.685 } 00:07:48.685 [2024-10-01 06:00:14.221087] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.685 [2024-10-01 06:00:14.265443] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.685 [2024-10-01 06:00:14.298936] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.944  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.944 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:48.944 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.512 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:49.512 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:49.512 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.512 06:00:14 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.512 [2024-10-01 06:00:15.012983] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:49.512 [2024-10-01 06:00:15.013253] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71989 ] 00:07:49.512 { 00:07:49.512 "subsystems": [ 00:07:49.512 { 00:07:49.512 "subsystem": "bdev", 00:07:49.512 "config": [ 00:07:49.512 { 00:07:49.512 "params": { 00:07:49.512 "trtype": "pcie", 00:07:49.512 "traddr": "0000:00:10.0", 00:07:49.512 "name": "Nvme0" 00:07:49.512 }, 00:07:49.512 "method": "bdev_nvme_attach_controller" 00:07:49.512 }, 00:07:49.512 { 00:07:49.512 "method": "bdev_wait_for_examine" 00:07:49.512 } 00:07:49.512 ] 00:07:49.512 } 00:07:49.512 ] 00:07:49.512 } 00:07:49.771 [2024-10-01 06:00:15.152012] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.771 [2024-10-01 06:00:15.192344] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.771 [2024-10-01 06:00:15.224965] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.030  Copying: 48/48 [kB] (average 46 MBps) 00:07:50.030 00:07:50.030 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:50.030 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:50.030 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.030 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.030 { 00:07:50.030 "subsystems": [ 00:07:50.030 { 00:07:50.030 "subsystem": "bdev", 00:07:50.030 "config": [ 00:07:50.030 { 00:07:50.030 "params": { 00:07:50.030 "trtype": "pcie", 00:07:50.030 "traddr": "0000:00:10.0", 00:07:50.030 "name": "Nvme0" 00:07:50.030 }, 00:07:50.030 "method": "bdev_nvme_attach_controller" 00:07:50.030 }, 00:07:50.030 { 00:07:50.030 "method": "bdev_wait_for_examine" 00:07:50.030 } 00:07:50.030 ] 00:07:50.030 } 00:07:50.030 ] 00:07:50.030 } 00:07:50.030 [2024-10-01 06:00:15.508641] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:50.030 [2024-10-01 06:00:15.508765] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72002 ] 00:07:50.289 [2024-10-01 06:00:15.648819] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.289 [2024-10-01 06:00:15.684424] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.289 [2024-10-01 06:00:15.715547] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.548  Copying: 48/48 [kB] (average 46 MBps) 00:07:50.548 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:50.548 06:00:15 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:50.548 [2024-10-01 06:00:15.999930] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
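Each of these spdk_dd processes starts a fresh DPDK environment, which is why the same "[ DPDK EAL parameters: ... ]" line recurs with only --file-prefix changing (it embeds the new PID so every short-lived process keeps its own hugepage and shared-memory files). Rough meaning of the recurring flags, based on general DPDK/SPDK option semantics rather than anything stated in this log:

    # spdk_dd --no-shconf              do not create a shared config for secondary processes
    #         -c 0x1                   core mask: a single core (matches "Total cores available: 1")
    #         --huge-unlink            unlink hugepage backing files once they are mapped
    #         --no-telemetry           disable the DPDK telemetry socket
    #         --log-level=lib.eal:6    per-component log verbosity (also cryptodev, power, user1)
    #         --iova-mode=pa           use physical addresses as IOVAs
    #         --base-virtaddr=0x200000000000   fixed base address for memory mappings
    #         --match-allocations      release hugepage memory in the same chunks it was allocated
    #         --file-prefix=spdk_pid72002      per-process prefix for EAL runtime files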
00:07:50.548 [2024-10-01 06:00:16.000281] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72018 ] 00:07:50.548 { 00:07:50.548 "subsystems": [ 00:07:50.548 { 00:07:50.548 "subsystem": "bdev", 00:07:50.548 "config": [ 00:07:50.548 { 00:07:50.548 "params": { 00:07:50.548 "trtype": "pcie", 00:07:50.548 "traddr": "0000:00:10.0", 00:07:50.548 "name": "Nvme0" 00:07:50.548 }, 00:07:50.548 "method": "bdev_nvme_attach_controller" 00:07:50.548 }, 00:07:50.548 { 00:07:50.548 "method": "bdev_wait_for_examine" 00:07:50.549 } 00:07:50.549 ] 00:07:50.549 } 00:07:50.549 ] 00:07:50.549 } 00:07:50.549 [2024-10-01 06:00:16.147472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.807 [2024-10-01 06:00:16.184289] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.807 [2024-10-01 06:00:16.213761] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.065  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:51.065 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:51.065 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.324 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:51.324 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:51.324 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.324 06:00:16 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.582 [2024-10-01 06:00:16.980322] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:51.582 [2024-10-01 06:00:16.980587] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72037 ] 00:07:51.582 { 00:07:51.582 "subsystems": [ 00:07:51.582 { 00:07:51.582 "subsystem": "bdev", 00:07:51.582 "config": [ 00:07:51.582 { 00:07:51.582 "params": { 00:07:51.582 "trtype": "pcie", 00:07:51.582 "traddr": "0000:00:10.0", 00:07:51.582 "name": "Nvme0" 00:07:51.582 }, 00:07:51.582 "method": "bdev_nvme_attach_controller" 00:07:51.582 }, 00:07:51.582 { 00:07:51.582 "method": "bdev_wait_for_examine" 00:07:51.582 } 00:07:51.582 ] 00:07:51.582 } 00:07:51.582 ] 00:07:51.582 } 00:07:51.582 [2024-10-01 06:00:17.117265] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.582 [2024-10-01 06:00:17.152568] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.582 [2024-10-01 06:00:17.179834] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.841  Copying: 48/48 [kB] (average 46 MBps) 00:07:51.841 00:07:51.841 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:51.841 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:51.841 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.841 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.099 [2024-10-01 06:00:17.460403] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:52.099 [2024-10-01 06:00:17.460503] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72049 ] 00:07:52.099 { 00:07:52.099 "subsystems": [ 00:07:52.099 { 00:07:52.099 "subsystem": "bdev", 00:07:52.099 "config": [ 00:07:52.099 { 00:07:52.099 "params": { 00:07:52.099 "trtype": "pcie", 00:07:52.099 "traddr": "0000:00:10.0", 00:07:52.099 "name": "Nvme0" 00:07:52.099 }, 00:07:52.099 "method": "bdev_nvme_attach_controller" 00:07:52.099 }, 00:07:52.099 { 00:07:52.099 "method": "bdev_wait_for_examine" 00:07:52.099 } 00:07:52.099 ] 00:07:52.099 } 00:07:52.099 ] 00:07:52.099 } 00:07:52.099 [2024-10-01 06:00:17.595643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.099 [2024-10-01 06:00:17.636150] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.099 [2024-10-01 06:00:17.667865] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.367  Copying: 48/48 [kB] (average 46 MBps) 00:07:52.367 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:52.367 06:00:17 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.367 [2024-10-01 06:00:17.956929] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:52.367 [2024-10-01 06:00:17.957188] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72066 ] 00:07:52.367 { 00:07:52.367 "subsystems": [ 00:07:52.367 { 00:07:52.367 "subsystem": "bdev", 00:07:52.367 "config": [ 00:07:52.367 { 00:07:52.367 "params": { 00:07:52.367 "trtype": "pcie", 00:07:52.367 "traddr": "0000:00:10.0", 00:07:52.367 "name": "Nvme0" 00:07:52.368 }, 00:07:52.368 "method": "bdev_nvme_attach_controller" 00:07:52.368 }, 00:07:52.368 { 00:07:52.368 "method": "bdev_wait_for_examine" 00:07:52.368 } 00:07:52.368 ] 00:07:52.368 } 00:07:52.368 ] 00:07:52.368 } 00:07:52.638 [2024-10-01 06:00:18.096650] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.638 [2024-10-01 06:00:18.131623] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.638 [2024-10-01 06:00:18.160186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:52.897  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:52.897 00:07:52.897 00:07:52.897 real 0m12.024s 00:07:52.897 user 0m8.842s 00:07:52.897 sys 0m3.784s 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.897 ************************************ 00:07:52.897 END TEST dd_rw 00:07:52.897 ************************************ 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:52.897 ************************************ 00:07:52.897 START TEST dd_rw_offset 00:07:52.897 ************************************ 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1125 -- # basic_offset 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:52.897 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:52.898 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=rng5794ivbqf92yml25kgf2c153hexewhfwqnz1nr81fodgtrnfc8bnqrjod3812zydvi79gkb3m6xms1e6rfsv6cq8cin7vfyfgjredjzeeys3de0dn9janmgj8h1pga7mb80zpltyahehu8lcdle8g9drwi4p0hr7yn570q8larb9ugrz9syub2af02iq774smuk1zlr92shlj382zic0laq2xs0ws5xr0nc0a1ai97rltagf4zotwz34t7hmn9re5x555pf40iibutmnarom4vqlmqeyxb4rcmzgs9k47vo5ce5qzbntkifkjxc2hmoblskcnz01lcdv6tewpn9ux1207lk2fqbop71697geoqbsqnakd4w19xok5bn2n0k5z15vgnl4pngj3c6wooxghu0ia1o3mh3dsmqw9lzqj4gbd0kazdnwqf5h2w7eqxcc8b5t8277kddwrcw6vo9yq8h4qgj1vgvpfutpdkxihl2mdugmeg5knuar6tze1zouhzdzmgtq4j02klzqws1omitpyz9c58cmqjxin9vd491pn0dwmyvrjvwksfs70bcubqgq7bemc9qnjyr4yhrk7jymkcd46j3ucq0lkx81drdccbkso5jqcghj3txyo1383g2kixsv1sas69uxceggsk5vin3kt41620gj97r79qq6uixw3npcb6r4vni783st0lz8p6mfza38c7b4b26keibtw0emayns9pu6tzk97yi8608ne2onj77zmvc6y9wuxduz63ii78rpaxawda4gniq38n8n9a8exrrykrvipi5trd1y12xdkl3mrjvrykxh0inl0992cl3jzkxaq8is0t809uolq4784begnan2hglo1n6ujr0i3pl74pqzgnloqoersnnxg962zpktr68ygzd2vclc2h4ftrjy6m4cps2649b5fvdmerb19f3mi2rtj36euuzdrd15bzd1dzwwt20o1pem3luub9wg6db2yh5usbz18tthvai5dcqjkz4w02nh9cqsexumzt2vuonq0fkn9692d0gdk62753xbqdihn72xcgdmujbpgvz97kumaqg4ojv3k4a6ic3ji72d4b6rn80meln0094uaae417bm76asbhr578fvogc12mt87ouqwmqtv94009p7zc9325s1w1fq8g08h322ee2snyb1n70a5iy1ov64p8xlmang2gispjyjcshu586i6k0scdd011qmukhuni4theac34o7mi5afoihkfzpjlfxv1u6kjx8u6s7kg9ktb6rszjnzwubcc66lk87ka2rybpedm2nmpmuj4n63izjzv52olhbsvrvezqw9xgq6n58mdqem8ohqcc0omjm5lslicqj6uqt2muwot3gapgy6ecf7dv5wc1fobp85vzcc567ll50rek7qlyu5pmty5fi3o23ds3iimezabn60bl97smb4jzb05l9t8dwkzg53friztszpftdpjgutvhrk4dizjb8k5a7o4kn1yohioq4qnkl9zs8anze5jb9ib0447stla5louwap836ftgoqzyvupq3bwjg0kyn2zk4ap07ipimbst5a43yc7ah7ljqajv2stwloi2m4h6qjj85l5ajawraow03gyy0vlvck23gat691b41w1q1xza7twihohfwx17agw07vfbkk0cffwlvwg3q1bueimzh7v9j30n2cl3q4kr6559zyif8tk5hgkad0kmuirmv77ikzuxx2wmnc1qab8qwwwvswz1ewg3i6sn24hvml5haysw4mjp6jwggns29qenc4d7eou8fyjgcjxdv5y4121yxlzmlk4cx3gnlnoagpa8nzzsx95toxj4o7zamowyejj7u3y6rz8webhxqnehzlljha57v0tvugym2ndrdlziqcmvqp1z71s9eqvtiuuuhucxvmbgmmpt4gyo2cxnisw9d1blq3g2m2o3eqfyxt05r20q3mp9tujrnpxb9ys93wsrtdpbvsn4fvjs9lqgy6gkopsnbod0iyxu9ckkayc3y4lbprrk1d3axxikt612rgg8wkkj54vr1ktd7d2pqym9w7hf5ja9cssgkogwfpzdr23d0ehhgbhgtunrpe7wjy5kx7511zrmy3k07zie6t2mb39df4qe2fa1rtd5s81giq0iezccu4a2z2ua9k3aj8h8qcalvq0ziux74v9wt703qddnhoenbv5g3gzwigukl55xfruh46yyons2lq62i73sxc4ueqvjzmz4dds0vlkwxzlvh0ucfo262ql7wfbvxy6q7rgmkcikh4r945b0dlfkb1jtibryfm97c7qcuj0ouvo30g1o9zhw3p2bpapt7duhz7yoz7xk5f5l80gb84p39s431f4ce4m8exg4tn42xe6kjykc2nii7pleu6odjamf6s80hxmfsn9bvxf2e4bp5p1x1ud1ue0urvcs522u1bgf11x8gd5vz7zsr58paayx93f5zew3i2qbmui73oehl5msovq9lgtq8cii1rfn6g2w40kkk1j5ld6md0buyiw3qjkxis9wu40xa8adh3fofsh0kzstybgkvxsgtorv54n502nf0xgxskxhy8tnd1583ui6j21lzttxwuu0b8xvonnyp96d9w4xzdysluuaybogh404qgral1msva7xh5rkxe76s1u473haiesn4inveyum0bzggj4w6d1bt9u51lqic3xyds8tyjcgm9okn2odr12pocb8cwegkrwg5whggsnw3jqgu9ntj99s71y7c9z9ij45c5nywjz7kscdogrwrtcpqorcv0k316asy3iuv620zt0j0ooshugueni1xdo4xdmjnp3zf2rog0its72m0ddki0zf68eesd1k9l6vz6pl1uzn5dk7mdjj8s7co9hjeji6pzvcvdnfdc0pvnv6y0lombrmx5f3feunuiegbppiryb8wufqlceyacpxaa07dq99c1hwkjzg2kzhngj6ah43j3o3bbgmbnxdpl2how2crd3lvdkik03g0zxc9g6llxz53gkcp2y1o851e2b5z66o3n8mmzi8x7azieb4lklp7tabmex7x6npr9mg9y1o9f966uonm4620o6zhlcy8x5u9iht9aodl3tpphwqcc5k2rikp1n6pinxeuxn1aaprlxlgl5ikfuukkfl1b9cmika7d4486cy597py6oq1uzloxco27zvaoagyzhq8ia686gacsjhhl8qx412yxq7v0vaw5x3bzd0fiv3rxnrjoy0t98beiw8yj100mm6xdebtklp7mxwxlt5p2th6e7p3adyhibttzzu9w2z19lrsbhyw1of3im83nr6yb9stkfq68mpxm5agzq0snk52g71yd1nqoh4rak8be32t7sqqh8z6sgxnozjjdduxapmzb3tse8hwdgjk3pnwiu8kqttz1v23ytai087b58w458azm6oc34q5o2kio80afkg2orswwkm80t15u8e7rp91ss2vxxml0dn876y7a7uuhsi
fon06j4vqjci7y0hvy7rwpq149j0y51tz9f83m2airnhrrw9gpqlthnxsf7d7bj3je3tdf5o8epyxco8syedjca8j5uhyzj2zhhw6wy61ay75gn2ty7ut2j8xlow011pnygl2ozn3mg9gt8q9klrbzvcja3yw7tufz64yh0ti3hv383birn7of9jsud8lfdmqz1s550etziiatwaiyuq62uha12b1vdr19vyztvvmpgzoixyyv5frjzqudgzio39or77b7qp9rhrtjmjbe8dgkvwns67sw60ux2ezrqe9r5r0u6ojvu2yiijvlnyyzlhoa3fu3t1tk1kpnmgu17stxdt9n5e62v7e2yvbtplvaz7h5bd3mv5hiwtn04q225lea1qm7yek74j3xc869xaf0ccfzliqs5pfwbpn2ozbgc5vklij5cr35frhj90q00u9x69yvw7j1domcnhimmpei7y6jbb9exwqbdbhffynvcql59nzlp8z8yqk8dagln2x7vf358syzcvovmwfnbw9mxz0a7i8hjuza 00:07:52.898 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:52.898 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:52.898 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:52.898 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.156 [2024-10-01 06:00:18.525335] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:53.156 [2024-10-01 06:00:18.525431] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72095 ] 00:07:53.156 { 00:07:53.156 "subsystems": [ 00:07:53.156 { 00:07:53.156 "subsystem": "bdev", 00:07:53.156 "config": [ 00:07:53.156 { 00:07:53.156 "params": { 00:07:53.156 "trtype": "pcie", 00:07:53.156 "traddr": "0000:00:10.0", 00:07:53.156 "name": "Nvme0" 00:07:53.156 }, 00:07:53.156 "method": "bdev_nvme_attach_controller" 00:07:53.156 }, 00:07:53.156 { 00:07:53.156 "method": "bdev_wait_for_examine" 00:07:53.156 } 00:07:53.156 ] 00:07:53.156 } 00:07:53.156 ] 00:07:53.156 } 00:07:53.156 [2024-10-01 06:00:18.668211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.156 [2024-10-01 06:00:18.714177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.156 [2024-10-01 06:00:18.751816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.414  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:53.414 00:07:53.414 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:53.414 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:53.414 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:53.414 06:00:18 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.672 [2024-10-01 06:00:19.047800] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
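dd_rw_offset, which began after the dd_rw summary above, checks that --seek and --skip address the bdev at a block offset: 4096 random bytes are written one block in, the same block is read back, and the two payloads are string-compared (the long rng5794... blob in the trace is that payload). Condensed, with the file plumbing around read assumed rather than copied from the script:

    # count = seek = skip = 1, payload is 4096 generated bytes
    data=$(gen_bytes 4096)
    printf '%s' "$data" > "$TEST_DIR/dd.dump0"
    "$SPDK_DD" --if="$TEST_DIR/dd.dump0" --ob=Nvme0n1 --seek=1 --json "$conf"            # write at block offset 1
    "$SPDK_DD" --ib=Nvme0n1 --of="$TEST_DIR/dd.dump1" --skip=1 --count=1 --json "$conf"  # read that block back
    read -rn4096 data_check < "$TEST_DIR/dd.dump1"
    [[ "$data" == "$data_check" ]]                         # must match byte for byte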
00:07:53.672 [2024-10-01 06:00:19.047911] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72110 ] 00:07:53.672 { 00:07:53.672 "subsystems": [ 00:07:53.672 { 00:07:53.672 "subsystem": "bdev", 00:07:53.672 "config": [ 00:07:53.672 { 00:07:53.672 "params": { 00:07:53.672 "trtype": "pcie", 00:07:53.672 "traddr": "0000:00:10.0", 00:07:53.672 "name": "Nvme0" 00:07:53.672 }, 00:07:53.672 "method": "bdev_nvme_attach_controller" 00:07:53.672 }, 00:07:53.672 { 00:07:53.672 "method": "bdev_wait_for_examine" 00:07:53.672 } 00:07:53.672 ] 00:07:53.672 } 00:07:53.672 ] 00:07:53.672 } 00:07:53.672 [2024-10-01 06:00:19.184024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.672 [2024-10-01 06:00:19.221593] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.672 [2024-10-01 06:00:19.252845] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.931  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:53.931 00:07:53.931 06:00:19 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ rng5794ivbqf92yml25kgf2c153hexewhfwqnz1nr81fodgtrnfc8bnqrjod3812zydvi79gkb3m6xms1e6rfsv6cq8cin7vfyfgjredjzeeys3de0dn9janmgj8h1pga7mb80zpltyahehu8lcdle8g9drwi4p0hr7yn570q8larb9ugrz9syub2af02iq774smuk1zlr92shlj382zic0laq2xs0ws5xr0nc0a1ai97rltagf4zotwz34t7hmn9re5x555pf40iibutmnarom4vqlmqeyxb4rcmzgs9k47vo5ce5qzbntkifkjxc2hmoblskcnz01lcdv6tewpn9ux1207lk2fqbop71697geoqbsqnakd4w19xok5bn2n0k5z15vgnl4pngj3c6wooxghu0ia1o3mh3dsmqw9lzqj4gbd0kazdnwqf5h2w7eqxcc8b5t8277kddwrcw6vo9yq8h4qgj1vgvpfutpdkxihl2mdugmeg5knuar6tze1zouhzdzmgtq4j02klzqws1omitpyz9c58cmqjxin9vd491pn0dwmyvrjvwksfs70bcubqgq7bemc9qnjyr4yhrk7jymkcd46j3ucq0lkx81drdccbkso5jqcghj3txyo1383g2kixsv1sas69uxceggsk5vin3kt41620gj97r79qq6uixw3npcb6r4vni783st0lz8p6mfza38c7b4b26keibtw0emayns9pu6tzk97yi8608ne2onj77zmvc6y9wuxduz63ii78rpaxawda4gniq38n8n9a8exrrykrvipi5trd1y12xdkl3mrjvrykxh0inl0992cl3jzkxaq8is0t809uolq4784begnan2hglo1n6ujr0i3pl74pqzgnloqoersnnxg962zpktr68ygzd2vclc2h4ftrjy6m4cps2649b5fvdmerb19f3mi2rtj36euuzdrd15bzd1dzwwt20o1pem3luub9wg6db2yh5usbz18tthvai5dcqjkz4w02nh9cqsexumzt2vuonq0fkn9692d0gdk62753xbqdihn72xcgdmujbpgvz97kumaqg4ojv3k4a6ic3ji72d4b6rn80meln0094uaae417bm76asbhr578fvogc12mt87ouqwmqtv94009p7zc9325s1w1fq8g08h322ee2snyb1n70a5iy1ov64p8xlmang2gispjyjcshu586i6k0scdd011qmukhuni4theac34o7mi5afoihkfzpjlfxv1u6kjx8u6s7kg9ktb6rszjnzwubcc66lk87ka2rybpedm2nmpmuj4n63izjzv52olhbsvrvezqw9xgq6n58mdqem8ohqcc0omjm5lslicqj6uqt2muwot3gapgy6ecf7dv5wc1fobp85vzcc567ll50rek7qlyu5pmty5fi3o23ds3iimezabn60bl97smb4jzb05l9t8dwkzg53friztszpftdpjgutvhrk4dizjb8k5a7o4kn1yohioq4qnkl9zs8anze5jb9ib0447stla5louwap836ftgoqzyvupq3bwjg0kyn2zk4ap07ipimbst5a43yc7ah7ljqajv2stwloi2m4h6qjj85l5ajawraow03gyy0vlvck23gat691b41w1q1xza7twihohfwx17agw07vfbkk0cffwlvwg3q1bueimzh7v9j30n2cl3q4kr6559zyif8tk5hgkad0kmuirmv77ikzuxx2wmnc1qab8qwwwvswz1ewg3i6sn24hvml5haysw4mjp6jwggns29qenc4d7eou8fyjgcjxdv5y4121yxlzmlk4cx3gnlnoagpa8nzzsx95toxj4o7zamowyejj7u3y6rz8webhxqnehzlljha57v0tvugym2ndrdlziqcmvqp1z71s9eqvtiuuuhucxvmbgmmpt4gyo2cxnisw9d1blq3g2m2o3eqfyxt05r20q3mp9tujrnpxb9ys93wsrtdpbvsn4fvjs9lqgy6gkopsnbod0iyxu9ckkayc3y4lbprrk1d3axxikt612rgg8wkkj54vr1ktd7d2pqym9w7hf5ja9cssgkogwfpzdr23d0ehhgbhgtunrpe7wjy5kx7511zrmy3k07z
ie6t2mb39df4qe2fa1rtd5s81giq0iezccu4a2z2ua9k3aj8h8qcalvq0ziux74v9wt703qddnhoenbv5g3gzwigukl55xfruh46yyons2lq62i73sxc4ueqvjzmz4dds0vlkwxzlvh0ucfo262ql7wfbvxy6q7rgmkcikh4r945b0dlfkb1jtibryfm97c7qcuj0ouvo30g1o9zhw3p2bpapt7duhz7yoz7xk5f5l80gb84p39s431f4ce4m8exg4tn42xe6kjykc2nii7pleu6odjamf6s80hxmfsn9bvxf2e4bp5p1x1ud1ue0urvcs522u1bgf11x8gd5vz7zsr58paayx93f5zew3i2qbmui73oehl5msovq9lgtq8cii1rfn6g2w40kkk1j5ld6md0buyiw3qjkxis9wu40xa8adh3fofsh0kzstybgkvxsgtorv54n502nf0xgxskxhy8tnd1583ui6j21lzttxwuu0b8xvonnyp96d9w4xzdysluuaybogh404qgral1msva7xh5rkxe76s1u473haiesn4inveyum0bzggj4w6d1bt9u51lqic3xyds8tyjcgm9okn2odr12pocb8cwegkrwg5whggsnw3jqgu9ntj99s71y7c9z9ij45c5nywjz7kscdogrwrtcpqorcv0k316asy3iuv620zt0j0ooshugueni1xdo4xdmjnp3zf2rog0its72m0ddki0zf68eesd1k9l6vz6pl1uzn5dk7mdjj8s7co9hjeji6pzvcvdnfdc0pvnv6y0lombrmx5f3feunuiegbppiryb8wufqlceyacpxaa07dq99c1hwkjzg2kzhngj6ah43j3o3bbgmbnxdpl2how2crd3lvdkik03g0zxc9g6llxz53gkcp2y1o851e2b5z66o3n8mmzi8x7azieb4lklp7tabmex7x6npr9mg9y1o9f966uonm4620o6zhlcy8x5u9iht9aodl3tpphwqcc5k2rikp1n6pinxeuxn1aaprlxlgl5ikfuukkfl1b9cmika7d4486cy597py6oq1uzloxco27zvaoagyzhq8ia686gacsjhhl8qx412yxq7v0vaw5x3bzd0fiv3rxnrjoy0t98beiw8yj100mm6xdebtklp7mxwxlt5p2th6e7p3adyhibttzzu9w2z19lrsbhyw1of3im83nr6yb9stkfq68mpxm5agzq0snk52g71yd1nqoh4rak8be32t7sqqh8z6sgxnozjjdduxapmzb3tse8hwdgjk3pnwiu8kqttz1v23ytai087b58w458azm6oc34q5o2kio80afkg2orswwkm80t15u8e7rp91ss2vxxml0dn876y7a7uuhsifon06j4vqjci7y0hvy7rwpq149j0y51tz9f83m2airnhrrw9gpqlthnxsf7d7bj3je3tdf5o8epyxco8syedjca8j5uhyzj2zhhw6wy61ay75gn2ty7ut2j8xlow011pnygl2ozn3mg9gt8q9klrbzvcja3yw7tufz64yh0ti3hv383birn7of9jsud8lfdmqz1s550etziiatwaiyuq62uha12b1vdr19vyztvvmpgzoixyyv5frjzqudgzio39or77b7qp9rhrtjmjbe8dgkvwns67sw60ux2ezrqe9r5r0u6ojvu2yiijvlnyyzlhoa3fu3t1tk1kpnmgu17stxdt9n5e62v7e2yvbtplvaz7h5bd3mv5hiwtn04q225lea1qm7yek74j3xc869xaf0ccfzliqs5pfwbpn2ozbgc5vklij5cr35frhj90q00u9x69yvw7j1domcnhimmpei7y6jbb9exwqbdbhffynvcql59nzlp8z8yqk8dagln2x7vf358syzcvovmwfnbw9mxz0a7i8hjuza == 
\r\n\g\5\7\9\4\i\v\b\q\f\9\2\y\m\l\2\5\k\g\f\2\c\1\5\3\h\e\x\e\w\h\f\w\q\n\z\1\n\r\8\1\f\o\d\g\t\r\n\f\c\8\b\n\q\r\j\o\d\3\8\1\2\z\y\d\v\i\7\9\g\k\b\3\m\6\x\m\s\1\e\6\r\f\s\v\6\c\q\8\c\i\n\7\v\f\y\f\g\j\r\e\d\j\z\e\e\y\s\3\d\e\0\d\n\9\j\a\n\m\g\j\8\h\1\p\g\a\7\m\b\8\0\z\p\l\t\y\a\h\e\h\u\8\l\c\d\l\e\8\g\9\d\r\w\i\4\p\0\h\r\7\y\n\5\7\0\q\8\l\a\r\b\9\u\g\r\z\9\s\y\u\b\2\a\f\0\2\i\q\7\7\4\s\m\u\k\1\z\l\r\9\2\s\h\l\j\3\8\2\z\i\c\0\l\a\q\2\x\s\0\w\s\5\x\r\0\n\c\0\a\1\a\i\9\7\r\l\t\a\g\f\4\z\o\t\w\z\3\4\t\7\h\m\n\9\r\e\5\x\5\5\5\p\f\4\0\i\i\b\u\t\m\n\a\r\o\m\4\v\q\l\m\q\e\y\x\b\4\r\c\m\z\g\s\9\k\4\7\v\o\5\c\e\5\q\z\b\n\t\k\i\f\k\j\x\c\2\h\m\o\b\l\s\k\c\n\z\0\1\l\c\d\v\6\t\e\w\p\n\9\u\x\1\2\0\7\l\k\2\f\q\b\o\p\7\1\6\9\7\g\e\o\q\b\s\q\n\a\k\d\4\w\1\9\x\o\k\5\b\n\2\n\0\k\5\z\1\5\v\g\n\l\4\p\n\g\j\3\c\6\w\o\o\x\g\h\u\0\i\a\1\o\3\m\h\3\d\s\m\q\w\9\l\z\q\j\4\g\b\d\0\k\a\z\d\n\w\q\f\5\h\2\w\7\e\q\x\c\c\8\b\5\t\8\2\7\7\k\d\d\w\r\c\w\6\v\o\9\y\q\8\h\4\q\g\j\1\v\g\v\p\f\u\t\p\d\k\x\i\h\l\2\m\d\u\g\m\e\g\5\k\n\u\a\r\6\t\z\e\1\z\o\u\h\z\d\z\m\g\t\q\4\j\0\2\k\l\z\q\w\s\1\o\m\i\t\p\y\z\9\c\5\8\c\m\q\j\x\i\n\9\v\d\4\9\1\p\n\0\d\w\m\y\v\r\j\v\w\k\s\f\s\7\0\b\c\u\b\q\g\q\7\b\e\m\c\9\q\n\j\y\r\4\y\h\r\k\7\j\y\m\k\c\d\4\6\j\3\u\c\q\0\l\k\x\8\1\d\r\d\c\c\b\k\s\o\5\j\q\c\g\h\j\3\t\x\y\o\1\3\8\3\g\2\k\i\x\s\v\1\s\a\s\6\9\u\x\c\e\g\g\s\k\5\v\i\n\3\k\t\4\1\6\2\0\g\j\9\7\r\7\9\q\q\6\u\i\x\w\3\n\p\c\b\6\r\4\v\n\i\7\8\3\s\t\0\l\z\8\p\6\m\f\z\a\3\8\c\7\b\4\b\2\6\k\e\i\b\t\w\0\e\m\a\y\n\s\9\p\u\6\t\z\k\9\7\y\i\8\6\0\8\n\e\2\o\n\j\7\7\z\m\v\c\6\y\9\w\u\x\d\u\z\6\3\i\i\7\8\r\p\a\x\a\w\d\a\4\g\n\i\q\3\8\n\8\n\9\a\8\e\x\r\r\y\k\r\v\i\p\i\5\t\r\d\1\y\1\2\x\d\k\l\3\m\r\j\v\r\y\k\x\h\0\i\n\l\0\9\9\2\c\l\3\j\z\k\x\a\q\8\i\s\0\t\8\0\9\u\o\l\q\4\7\8\4\b\e\g\n\a\n\2\h\g\l\o\1\n\6\u\j\r\0\i\3\p\l\7\4\p\q\z\g\n\l\o\q\o\e\r\s\n\n\x\g\9\6\2\z\p\k\t\r\6\8\y\g\z\d\2\v\c\l\c\2\h\4\f\t\r\j\y\6\m\4\c\p\s\2\6\4\9\b\5\f\v\d\m\e\r\b\1\9\f\3\m\i\2\r\t\j\3\6\e\u\u\z\d\r\d\1\5\b\z\d\1\d\z\w\w\t\2\0\o\1\p\e\m\3\l\u\u\b\9\w\g\6\d\b\2\y\h\5\u\s\b\z\1\8\t\t\h\v\a\i\5\d\c\q\j\k\z\4\w\0\2\n\h\9\c\q\s\e\x\u\m\z\t\2\v\u\o\n\q\0\f\k\n\9\6\9\2\d\0\g\d\k\6\2\7\5\3\x\b\q\d\i\h\n\7\2\x\c\g\d\m\u\j\b\p\g\v\z\9\7\k\u\m\a\q\g\4\o\j\v\3\k\4\a\6\i\c\3\j\i\7\2\d\4\b\6\r\n\8\0\m\e\l\n\0\0\9\4\u\a\a\e\4\1\7\b\m\7\6\a\s\b\h\r\5\7\8\f\v\o\g\c\1\2\m\t\8\7\o\u\q\w\m\q\t\v\9\4\0\0\9\p\7\z\c\9\3\2\5\s\1\w\1\f\q\8\g\0\8\h\3\2\2\e\e\2\s\n\y\b\1\n\7\0\a\5\i\y\1\o\v\6\4\p\8\x\l\m\a\n\g\2\g\i\s\p\j\y\j\c\s\h\u\5\8\6\i\6\k\0\s\c\d\d\0\1\1\q\m\u\k\h\u\n\i\4\t\h\e\a\c\3\4\o\7\m\i\5\a\f\o\i\h\k\f\z\p\j\l\f\x\v\1\u\6\k\j\x\8\u\6\s\7\k\g\9\k\t\b\6\r\s\z\j\n\z\w\u\b\c\c\6\6\l\k\8\7\k\a\2\r\y\b\p\e\d\m\2\n\m\p\m\u\j\4\n\6\3\i\z\j\z\v\5\2\o\l\h\b\s\v\r\v\e\z\q\w\9\x\g\q\6\n\5\8\m\d\q\e\m\8\o\h\q\c\c\0\o\m\j\m\5\l\s\l\i\c\q\j\6\u\q\t\2\m\u\w\o\t\3\g\a\p\g\y\6\e\c\f\7\d\v\5\w\c\1\f\o\b\p\8\5\v\z\c\c\5\6\7\l\l\5\0\r\e\k\7\q\l\y\u\5\p\m\t\y\5\f\i\3\o\2\3\d\s\3\i\i\m\e\z\a\b\n\6\0\b\l\9\7\s\m\b\4\j\z\b\0\5\l\9\t\8\d\w\k\z\g\5\3\f\r\i\z\t\s\z\p\f\t\d\p\j\g\u\t\v\h\r\k\4\d\i\z\j\b\8\k\5\a\7\o\4\k\n\1\y\o\h\i\o\q\4\q\n\k\l\9\z\s\8\a\n\z\e\5\j\b\9\i\b\0\4\4\7\s\t\l\a\5\l\o\u\w\a\p\8\3\6\f\t\g\o\q\z\y\v\u\p\q\3\b\w\j\g\0\k\y\n\2\z\k\4\a\p\0\7\i\p\i\m\b\s\t\5\a\4\3\y\c\7\a\h\7\l\j\q\a\j\v\2\s\t\w\l\o\i\2\m\4\h\6\q\j\j\8\5\l\5\a\j\a\w\r\a\o\w\0\3\g\y\y\0\v\l\v\c\k\2\3\g\a\t\6\9\1\b\4\1\w\1\q\1\x\z\a\7\t\w\i\h\o\h\f\w\x\1\7\a\g\w\0\7\v\f\b\k\k\0\c\f\f\w\l\v\w\g\3\q\1\b\u\e\i\m\z\h\7\v\9\j\3\0\n\2\c\l\3\q\4\k\r\6\5\5\9\z\y\i\f\8\t\k\5\h\g\k\a\d\0\k\m\u\i\r\m\v\7\7\i\k\z\u\x\x\2\w\m\n\c\1\q\a\b\8\q\w\w\w\
v\s\w\z\1\e\w\g\3\i\6\s\n\2\4\h\v\m\l\5\h\a\y\s\w\4\m\j\p\6\j\w\g\g\n\s\2\9\q\e\n\c\4\d\7\e\o\u\8\f\y\j\g\c\j\x\d\v\5\y\4\1\2\1\y\x\l\z\m\l\k\4\c\x\3\g\n\l\n\o\a\g\p\a\8\n\z\z\s\x\9\5\t\o\x\j\4\o\7\z\a\m\o\w\y\e\j\j\7\u\3\y\6\r\z\8\w\e\b\h\x\q\n\e\h\z\l\l\j\h\a\5\7\v\0\t\v\u\g\y\m\2\n\d\r\d\l\z\i\q\c\m\v\q\p\1\z\7\1\s\9\e\q\v\t\i\u\u\u\h\u\c\x\v\m\b\g\m\m\p\t\4\g\y\o\2\c\x\n\i\s\w\9\d\1\b\l\q\3\g\2\m\2\o\3\e\q\f\y\x\t\0\5\r\2\0\q\3\m\p\9\t\u\j\r\n\p\x\b\9\y\s\9\3\w\s\r\t\d\p\b\v\s\n\4\f\v\j\s\9\l\q\g\y\6\g\k\o\p\s\n\b\o\d\0\i\y\x\u\9\c\k\k\a\y\c\3\y\4\l\b\p\r\r\k\1\d\3\a\x\x\i\k\t\6\1\2\r\g\g\8\w\k\k\j\5\4\v\r\1\k\t\d\7\d\2\p\q\y\m\9\w\7\h\f\5\j\a\9\c\s\s\g\k\o\g\w\f\p\z\d\r\2\3\d\0\e\h\h\g\b\h\g\t\u\n\r\p\e\7\w\j\y\5\k\x\7\5\1\1\z\r\m\y\3\k\0\7\z\i\e\6\t\2\m\b\3\9\d\f\4\q\e\2\f\a\1\r\t\d\5\s\8\1\g\i\q\0\i\e\z\c\c\u\4\a\2\z\2\u\a\9\k\3\a\j\8\h\8\q\c\a\l\v\q\0\z\i\u\x\7\4\v\9\w\t\7\0\3\q\d\d\n\h\o\e\n\b\v\5\g\3\g\z\w\i\g\u\k\l\5\5\x\f\r\u\h\4\6\y\y\o\n\s\2\l\q\6\2\i\7\3\s\x\c\4\u\e\q\v\j\z\m\z\4\d\d\s\0\v\l\k\w\x\z\l\v\h\0\u\c\f\o\2\6\2\q\l\7\w\f\b\v\x\y\6\q\7\r\g\m\k\c\i\k\h\4\r\9\4\5\b\0\d\l\f\k\b\1\j\t\i\b\r\y\f\m\9\7\c\7\q\c\u\j\0\o\u\v\o\3\0\g\1\o\9\z\h\w\3\p\2\b\p\a\p\t\7\d\u\h\z\7\y\o\z\7\x\k\5\f\5\l\8\0\g\b\8\4\p\3\9\s\4\3\1\f\4\c\e\4\m\8\e\x\g\4\t\n\4\2\x\e\6\k\j\y\k\c\2\n\i\i\7\p\l\e\u\6\o\d\j\a\m\f\6\s\8\0\h\x\m\f\s\n\9\b\v\x\f\2\e\4\b\p\5\p\1\x\1\u\d\1\u\e\0\u\r\v\c\s\5\2\2\u\1\b\g\f\1\1\x\8\g\d\5\v\z\7\z\s\r\5\8\p\a\a\y\x\9\3\f\5\z\e\w\3\i\2\q\b\m\u\i\7\3\o\e\h\l\5\m\s\o\v\q\9\l\g\t\q\8\c\i\i\1\r\f\n\6\g\2\w\4\0\k\k\k\1\j\5\l\d\6\m\d\0\b\u\y\i\w\3\q\j\k\x\i\s\9\w\u\4\0\x\a\8\a\d\h\3\f\o\f\s\h\0\k\z\s\t\y\b\g\k\v\x\s\g\t\o\r\v\5\4\n\5\0\2\n\f\0\x\g\x\s\k\x\h\y\8\t\n\d\1\5\8\3\u\i\6\j\2\1\l\z\t\t\x\w\u\u\0\b\8\x\v\o\n\n\y\p\9\6\d\9\w\4\x\z\d\y\s\l\u\u\a\y\b\o\g\h\4\0\4\q\g\r\a\l\1\m\s\v\a\7\x\h\5\r\k\x\e\7\6\s\1\u\4\7\3\h\a\i\e\s\n\4\i\n\v\e\y\u\m\0\b\z\g\g\j\4\w\6\d\1\b\t\9\u\5\1\l\q\i\c\3\x\y\d\s\8\t\y\j\c\g\m\9\o\k\n\2\o\d\r\1\2\p\o\c\b\8\c\w\e\g\k\r\w\g\5\w\h\g\g\s\n\w\3\j\q\g\u\9\n\t\j\9\9\s\7\1\y\7\c\9\z\9\i\j\4\5\c\5\n\y\w\j\z\7\k\s\c\d\o\g\r\w\r\t\c\p\q\o\r\c\v\0\k\3\1\6\a\s\y\3\i\u\v\6\2\0\z\t\0\j\0\o\o\s\h\u\g\u\e\n\i\1\x\d\o\4\x\d\m\j\n\p\3\z\f\2\r\o\g\0\i\t\s\7\2\m\0\d\d\k\i\0\z\f\6\8\e\e\s\d\1\k\9\l\6\v\z\6\p\l\1\u\z\n\5\d\k\7\m\d\j\j\8\s\7\c\o\9\h\j\e\j\i\6\p\z\v\c\v\d\n\f\d\c\0\p\v\n\v\6\y\0\l\o\m\b\r\m\x\5\f\3\f\e\u\n\u\i\e\g\b\p\p\i\r\y\b\8\w\u\f\q\l\c\e\y\a\c\p\x\a\a\0\7\d\q\9\9\c\1\h\w\k\j\z\g\2\k\z\h\n\g\j\6\a\h\4\3\j\3\o\3\b\b\g\m\b\n\x\d\p\l\2\h\o\w\2\c\r\d\3\l\v\d\k\i\k\0\3\g\0\z\x\c\9\g\6\l\l\x\z\5\3\g\k\c\p\2\y\1\o\8\5\1\e\2\b\5\z\6\6\o\3\n\8\m\m\z\i\8\x\7\a\z\i\e\b\4\l\k\l\p\7\t\a\b\m\e\x\7\x\6\n\p\r\9\m\g\9\y\1\o\9\f\9\6\6\u\o\n\m\4\6\2\0\o\6\z\h\l\c\y\8\x\5\u\9\i\h\t\9\a\o\d\l\3\t\p\p\h\w\q\c\c\5\k\2\r\i\k\p\1\n\6\p\i\n\x\e\u\x\n\1\a\a\p\r\l\x\l\g\l\5\i\k\f\u\u\k\k\f\l\1\b\9\c\m\i\k\a\7\d\4\4\8\6\c\y\5\9\7\p\y\6\o\q\1\u\z\l\o\x\c\o\2\7\z\v\a\o\a\g\y\z\h\q\8\i\a\6\8\6\g\a\c\s\j\h\h\l\8\q\x\4\1\2\y\x\q\7\v\0\v\a\w\5\x\3\b\z\d\0\f\i\v\3\r\x\n\r\j\o\y\0\t\9\8\b\e\i\w\8\y\j\1\0\0\m\m\6\x\d\e\b\t\k\l\p\7\m\x\w\x\l\t\5\p\2\t\h\6\e\7\p\3\a\d\y\h\i\b\t\t\z\z\u\9\w\2\z\1\9\l\r\s\b\h\y\w\1\o\f\3\i\m\8\3\n\r\6\y\b\9\s\t\k\f\q\6\8\m\p\x\m\5\a\g\z\q\0\s\n\k\5\2\g\7\1\y\d\1\n\q\o\h\4\r\a\k\8\b\e\3\2\t\7\s\q\q\h\8\z\6\s\g\x\n\o\z\j\j\d\d\u\x\a\p\m\z\b\3\t\s\e\8\h\w\d\g\j\k\3\p\n\w\i\u\8\k\q\t\t\z\1\v\2\3\y\t\a\i\0\8\7\b\5\8\w\4\5\8\a\z\m\6\o\c\3\4\q\5\o\2\k\i\o\8\0\a\f\k\g\2\o\r\s\w\w\k\m\8\0\t\1\5\u\8\e\7\r\p\9\1\s\s\2\v\x\x\m\l\0\d\n\8\7\6\y\7\a\7\u\u\h\s\i\f\o\n\0\6
\j\4\v\q\j\c\i\7\y\0\h\v\y\7\r\w\p\q\1\4\9\j\0\y\5\1\t\z\9\f\8\3\m\2\a\i\r\n\h\r\r\w\9\g\p\q\l\t\h\n\x\s\f\7\d\7\b\j\3\j\e\3\t\d\f\5\o\8\e\p\y\x\c\o\8\s\y\e\d\j\c\a\8\j\5\u\h\y\z\j\2\z\h\h\w\6\w\y\6\1\a\y\7\5\g\n\2\t\y\7\u\t\2\j\8\x\l\o\w\0\1\1\p\n\y\g\l\2\o\z\n\3\m\g\9\g\t\8\q\9\k\l\r\b\z\v\c\j\a\3\y\w\7\t\u\f\z\6\4\y\h\0\t\i\3\h\v\3\8\3\b\i\r\n\7\o\f\9\j\s\u\d\8\l\f\d\m\q\z\1\s\5\5\0\e\t\z\i\i\a\t\w\a\i\y\u\q\6\2\u\h\a\1\2\b\1\v\d\r\1\9\v\y\z\t\v\v\m\p\g\z\o\i\x\y\y\v\5\f\r\j\z\q\u\d\g\z\i\o\3\9\o\r\7\7\b\7\q\p\9\r\h\r\t\j\m\j\b\e\8\d\g\k\v\w\n\s\6\7\s\w\6\0\u\x\2\e\z\r\q\e\9\r\5\r\0\u\6\o\j\v\u\2\y\i\i\j\v\l\n\y\y\z\l\h\o\a\3\f\u\3\t\1\t\k\1\k\p\n\m\g\u\1\7\s\t\x\d\t\9\n\5\e\6\2\v\7\e\2\y\v\b\t\p\l\v\a\z\7\h\5\b\d\3\m\v\5\h\i\w\t\n\0\4\q\2\2\5\l\e\a\1\q\m\7\y\e\k\7\4\j\3\x\c\8\6\9\x\a\f\0\c\c\f\z\l\i\q\s\5\p\f\w\b\p\n\2\o\z\b\g\c\5\v\k\l\i\j\5\c\r\3\5\f\r\h\j\9\0\q\0\0\u\9\x\6\9\y\v\w\7\j\1\d\o\m\c\n\h\i\m\m\p\e\i\7\y\6\j\b\b\9\e\x\w\q\b\d\b\h\f\f\y\n\v\c\q\l\5\9\n\z\l\p\8\z\8\y\q\k\8\d\a\g\l\n\2\x\7\v\f\3\5\8\s\y\z\c\v\o\v\m\w\f\n\b\w\9\m\x\z\0\a\7\i\8\h\j\u\z\a ]] 00:07:53.932 00:07:53.932 real 0m1.054s 00:07:53.932 user 0m0.709s 00:07:53.932 sys 0m0.424s 00:07:53.932 ************************************ 00:07:53.932 END TEST dd_rw_offset 00:07:53.932 ************************************ 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.932 06:00:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.192 [2024-10-01 06:00:19.572578] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
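The cleanup step starting here drives spdk_dd from a generated JSON configuration: a single bdev subsystem that attaches the NVMe controller at 0000:00:10.0 as Nvme0 and waits for examine, then zeroes one 1 MiB block of Nvme0n1. A minimal standalone sketch of the same invocation follows; the /tmp path is an assumption, while the JSON body and the spdk_dd options are copied from the config dumps in this log.

# Sketch only: reproduce the clear_nvme step outside the harness.
cat > /tmp/dd_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:10.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
# Zero one 1 MiB block of the attached bdev, as dd/common.sh's clear_nvme does.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 \
  --json /tmp/dd_bdev.json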
00:07:54.192 [2024-10-01 06:00:19.572675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72138 ] 00:07:54.192 { 00:07:54.192 "subsystems": [ 00:07:54.192 { 00:07:54.192 "subsystem": "bdev", 00:07:54.192 "config": [ 00:07:54.192 { 00:07:54.192 "params": { 00:07:54.192 "trtype": "pcie", 00:07:54.192 "traddr": "0000:00:10.0", 00:07:54.192 "name": "Nvme0" 00:07:54.192 }, 00:07:54.192 "method": "bdev_nvme_attach_controller" 00:07:54.192 }, 00:07:54.192 { 00:07:54.192 "method": "bdev_wait_for_examine" 00:07:54.192 } 00:07:54.192 ] 00:07:54.192 } 00:07:54.192 ] 00:07:54.192 } 00:07:54.192 [2024-10-01 06:00:19.709609] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.192 [2024-10-01 06:00:19.744731] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.192 [2024-10-01 06:00:19.773612] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:54.450  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:54.450 00:07:54.450 06:00:19 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.450 00:07:54.450 real 0m14.643s 00:07:54.450 user 0m10.475s 00:07:54.450 sys 0m4.740s 00:07:54.450 ************************************ 00:07:54.450 END TEST spdk_dd_basic_rw 00:07:54.450 ************************************ 00:07:54.450 06:00:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.450 06:00:19 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.450 06:00:20 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:54.450 06:00:20 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.450 06:00:20 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.450 06:00:20 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:54.450 ************************************ 00:07:54.450 START TEST spdk_dd_posix 00:07:54.450 ************************************ 00:07:54.450 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:54.709 * Looking for test storage... 
00:07:54.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lcov --version 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:54.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.709 --rc genhtml_branch_coverage=1 00:07:54.709 --rc genhtml_function_coverage=1 00:07:54.709 --rc genhtml_legend=1 00:07:54.709 --rc geninfo_all_blocks=1 00:07:54.709 --rc geninfo_unexecuted_blocks=1 00:07:54.709 00:07:54.709 ' 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:54.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.709 --rc genhtml_branch_coverage=1 00:07:54.709 --rc genhtml_function_coverage=1 00:07:54.709 --rc genhtml_legend=1 00:07:54.709 --rc geninfo_all_blocks=1 00:07:54.709 --rc geninfo_unexecuted_blocks=1 00:07:54.709 00:07:54.709 ' 00:07:54.709 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:54.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.709 --rc genhtml_branch_coverage=1 00:07:54.709 --rc genhtml_function_coverage=1 00:07:54.710 --rc genhtml_legend=1 00:07:54.710 --rc geninfo_all_blocks=1 00:07:54.710 --rc geninfo_unexecuted_blocks=1 00:07:54.710 00:07:54.710 ' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:54.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:54.710 --rc genhtml_branch_coverage=1 00:07:54.710 --rc genhtml_function_coverage=1 00:07:54.710 --rc genhtml_legend=1 00:07:54.710 --rc geninfo_all_blocks=1 00:07:54.710 --rc geninfo_unexecuted_blocks=1 00:07:54.710 00:07:54.710 ' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:54.710 * First test run, liburing in use 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # 
xtrace_disable 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 ************************************ 00:07:54.710 START TEST dd_flag_append 00:07:54.710 ************************************ 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1125 -- # append 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=hc5jhhqaimzux4brackcpnpvtovns0fn 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=hrpkqzv1ukd8bw6qscd3aulfhqy22r5c 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s hc5jhhqaimzux4brackcpnpvtovns0fn 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s hrpkqzv1ukd8bw6qscd3aulfhqy22r5c 00:07:54.710 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:54.710 [2024-10-01 06:00:20.323038] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
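The append test being set up above seeds dd.dump0 and dd.dump1 with two 32-byte strings and then runs spdk_dd with --oflag=append; the check that follows (dump1 ends up holding its original string with dump0's string appended) confirms the output file was opened for append rather than truncated. A small sketch of the same pattern with short, made-up payloads; SPDK_DD is an assumed shorthand for the binary path used throughout this log.

# Sketch: verify --oflag=append preserves the existing output bytes.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s abcd > dd.dump0
printf %s wxyz > dd.dump1
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1 --oflag=append
[[ "$(cat dd.dump1)" == wxyzabcd ]] && echo "append kept the original contents"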
00:07:54.710 [2024-10-01 06:00:20.323154] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72206 ] 00:07:54.968 [2024-10-01 06:00:20.458469] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.969 [2024-10-01 06:00:20.492018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.969 [2024-10-01 06:00:20.519817] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.226  Copying: 32/32 [B] (average 31 kBps) 00:07:55.226 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ hrpkqzv1ukd8bw6qscd3aulfhqy22r5chc5jhhqaimzux4brackcpnpvtovns0fn == \h\r\p\k\q\z\v\1\u\k\d\8\b\w\6\q\s\c\d\3\a\u\l\f\h\q\y\2\2\r\5\c\h\c\5\j\h\h\q\a\i\m\z\u\x\4\b\r\a\c\k\c\p\n\p\v\t\o\v\n\s\0\f\n ]] 00:07:55.226 00:07:55.226 real 0m0.407s 00:07:55.226 user 0m0.205s 00:07:55.226 sys 0m0.172s 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:07:55.226 ************************************ 00:07:55.226 END TEST dd_flag_append 00:07:55.226 ************************************ 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:55.226 ************************************ 00:07:55.226 START TEST dd_flag_directory 00:07:55.226 ************************************ 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1125 -- # directory 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.226 06:00:20 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:55.226 [2024-10-01 06:00:20.769088] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:55.226 [2024-10-01 06:00:20.769176] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72234 ] 00:07:55.485 [2024-10-01 06:00:20.899638] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.485 [2024-10-01 06:00:20.937617] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.485 [2024-10-01 06:00:20.965591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.485 [2024-10-01 06:00:20.980628] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.485 [2024-10-01 06:00:20.980700] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:55.485 [2024-10-01 06:00:20.980728] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:55.485 [2024-10-01 06:00:21.039083] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@650 -- # local es=0 00:07:55.485 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.744 06:00:21 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:55.744 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:55.744 [2024-10-01 06:00:21.160532] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:55.744 [2024-10-01 06:00:21.160642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72244 ] 00:07:55.744 [2024-10-01 06:00:21.296710] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.744 [2024-10-01 06:00:21.331782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.003 [2024-10-01 06:00:21.361841] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.003 [2024-10-01 06:00:21.377778] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.003 [2024-10-01 06:00:21.377845] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:56.003 [2024-10-01 06:00:21.377873] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.003 [2024-10-01 06:00:21.438550] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@653 -- # es=236 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@662 -- # es=108 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@670 -- # es=1 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.003 00:07:56.003 real 0m0.785s 00:07:56.003 user 0m0.373s 00:07:56.003 sys 0m0.204s 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:07:56.003 ************************************ 00:07:56.003 END TEST dd_flag_directory 00:07:56.003 ************************************ 00:07:56.003 06:00:21 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:56.003 ************************************ 00:07:56.003 START TEST dd_flag_nofollow 00:07:56.003 ************************************ 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1125 -- # nofollow 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.003 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.262 [2024-10-01 06:00:21.620768] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
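The directory test that just finished is a negative check: opening a regular file with --iflag=directory (or --oflag=directory) makes spdk_dd request O_DIRECTORY, which fails with "Not a directory", and the harness only massages the non-zero exit code afterwards. A condensed sketch of that expectation, reusing the same file names with an assumed SPDK_DD path:

# Sketch: O_DIRECTORY on a regular file is expected to fail with ENOTDIR.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
touch dd.dump0
if "$SPDK_DD" --if=dd.dump0 --iflag=directory --of=dd.dump0; then
  echo "unexpected success" >&2
  exit 1
fi
echo "failed as expected: dd.dump0 is not a directory"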
00:07:56.262 [2024-10-01 06:00:21.621334] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72267 ] 00:07:56.262 [2024-10-01 06:00:21.757029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.262 [2024-10-01 06:00:21.795290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.262 [2024-10-01 06:00:21.824939] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.262 [2024-10-01 06:00:21.842795] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.262 [2024-10-01 06:00:21.842858] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:56.262 [2024-10-01 06:00:21.842873] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.521 [2024-10-01 06:00:21.905217] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@650 -- # local es=0 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:56.521 06:00:21 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:56.521 06:00:21 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:56.521 [2024-10-01 06:00:22.026821] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:56.521 [2024-10-01 06:00:22.026939] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72282 ] 00:07:56.780 [2024-10-01 06:00:22.163107] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.780 [2024-10-01 06:00:22.195499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.780 [2024-10-01 06:00:22.223077] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.780 [2024-10-01 06:00:22.238179] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:56.780 [2024-10-01 06:00:22.238246] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:56.780 [2024-10-01 06:00:22.238260] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:56.780 [2024-10-01 06:00:22.300419] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@653 -- # es=216 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@662 -- # es=88 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # case "$es" in 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@670 -- # es=1 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:56.780 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.038 [2024-10-01 06:00:22.427687] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
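Both nofollow runs above are expected to fail: dd.dump0.link and dd.dump1.link are symlinks, and --iflag=nofollow / --oflag=nofollow open them with O_NOFOLLOW, which the kernel rejects with "Too many levels of symbolic links" (ELOOP). The run that follows copies through the link without the flag and succeeds. A minimal sketch of the pair of checks, with SPDK_DD assumed as before:

# Sketch: nofollow on a symlink fails, the plain copy through it succeeds.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
printf %s payload > dd.dump0
ln -fs dd.dump0 dd.dump0.link
if "$SPDK_DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1; then
  echo "unexpected success" >&2
  exit 1
fi
# No nofollow here, so the link is followed and the copy completes.
"$SPDK_DD" --if=dd.dump0.link --of=dd.dump1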
00:07:57.038 [2024-10-01 06:00:22.427782] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72284 ] 00:07:57.038 [2024-10-01 06:00:22.562477] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.038 [2024-10-01 06:00:22.596659] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.038 [2024-10-01 06:00:22.624338] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:57.296  Copying: 512/512 [B] (average 500 kBps) 00:07:57.297 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ fqxb272vhtjlog4oz2wdwj7ue3ujuylgauzrve7wi9ppq4cyt2gn8fnfdszypj8g6rieys8rdwt6y6iofdkntxknw56cugzmr3h8xcepsd2984gcygbhncxahqxjkvm47945vztmn08majkqk52nlu76inb2z2ld33ey0riszxlwwig99ffsncai8i9brco2zj4c5uk4pw2iv2yv4bi6ktxp8kcsum5y5kkix7ysytvbyaldcflwem4n6xrrfqrx1gbkv41ypvv78d7fozcpzm10uk2sw34fekegro763baozgrrsib4hril71wxkfa2v0qykugljvyutn4b5880ngrlx2kizwco6k8ayo8bkct45x7womuk7t84k34dhll2otdzkx8yq398sj812wsgs4v0a1ly4kmy9w9qzxd5ygciou886jq3l354q98ninyzg4tmjydy3t8ho8n0eiceonie9474yxkgic3ibi4cwj053zfe83m6yen6qb08uxpz == \f\q\x\b\2\7\2\v\h\t\j\l\o\g\4\o\z\2\w\d\w\j\7\u\e\3\u\j\u\y\l\g\a\u\z\r\v\e\7\w\i\9\p\p\q\4\c\y\t\2\g\n\8\f\n\f\d\s\z\y\p\j\8\g\6\r\i\e\y\s\8\r\d\w\t\6\y\6\i\o\f\d\k\n\t\x\k\n\w\5\6\c\u\g\z\m\r\3\h\8\x\c\e\p\s\d\2\9\8\4\g\c\y\g\b\h\n\c\x\a\h\q\x\j\k\v\m\4\7\9\4\5\v\z\t\m\n\0\8\m\a\j\k\q\k\5\2\n\l\u\7\6\i\n\b\2\z\2\l\d\3\3\e\y\0\r\i\s\z\x\l\w\w\i\g\9\9\f\f\s\n\c\a\i\8\i\9\b\r\c\o\2\z\j\4\c\5\u\k\4\p\w\2\i\v\2\y\v\4\b\i\6\k\t\x\p\8\k\c\s\u\m\5\y\5\k\k\i\x\7\y\s\y\t\v\b\y\a\l\d\c\f\l\w\e\m\4\n\6\x\r\r\f\q\r\x\1\g\b\k\v\4\1\y\p\v\v\7\8\d\7\f\o\z\c\p\z\m\1\0\u\k\2\s\w\3\4\f\e\k\e\g\r\o\7\6\3\b\a\o\z\g\r\r\s\i\b\4\h\r\i\l\7\1\w\x\k\f\a\2\v\0\q\y\k\u\g\l\j\v\y\u\t\n\4\b\5\8\8\0\n\g\r\l\x\2\k\i\z\w\c\o\6\k\8\a\y\o\8\b\k\c\t\4\5\x\7\w\o\m\u\k\7\t\8\4\k\3\4\d\h\l\l\2\o\t\d\z\k\x\8\y\q\3\9\8\s\j\8\1\2\w\s\g\s\4\v\0\a\1\l\y\4\k\m\y\9\w\9\q\z\x\d\5\y\g\c\i\o\u\8\8\6\j\q\3\l\3\5\4\q\9\8\n\i\n\y\z\g\4\t\m\j\y\d\y\3\t\8\h\o\8\n\0\e\i\c\e\o\n\i\e\9\4\7\4\y\x\k\g\i\c\3\i\b\i\4\c\w\j\0\5\3\z\f\e\8\3\m\6\y\e\n\6\q\b\0\8\u\x\p\z ]] 00:07:57.297 00:07:57.297 real 0m1.225s 00:07:57.297 user 0m0.602s 00:07:57.297 sys 0m0.385s 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:07:57.297 ************************************ 00:07:57.297 END TEST dd_flag_nofollow 00:07:57.297 ************************************ 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:57.297 ************************************ 00:07:57.297 START TEST dd_flag_noatime 00:07:57.297 ************************************ 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1125 -- # noatime 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local atime_if 00:07:57.297 06:00:22 
spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1727762422 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1727762422 00:07:57.297 06:00:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:07:58.672 06:00:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.672 [2024-10-01 06:00:23.910978] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:07:58.672 [2024-10-01 06:00:23.911082] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72327 ] 00:07:58.672 [2024-10-01 06:00:24.050671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.672 [2024-10-01 06:00:24.093056] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.672 [2024-10-01 06:00:24.126597] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.672  Copying: 512/512 [B] (average 500 kBps) 00:07:58.672 00:07:58.672 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.672 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1727762422 )) 00:07:58.672 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.672 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1727762422 )) 00:07:58.931 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:58.931 [2024-10-01 06:00:24.341308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:58.931 [2024-10-01 06:00:24.341410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72340 ] 00:07:58.931 [2024-10-01 06:00:24.475315] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.931 [2024-10-01 06:00:24.507961] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.931 [2024-10-01 06:00:24.538583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.191  Copying: 512/512 [B] (average 500 kBps) 00:07:59.191 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1727762424 )) 00:07:59.191 00:07:59.191 real 0m1.853s 00:07:59.191 user 0m0.419s 00:07:59.191 sys 0m0.371s 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:59.191 ************************************ 00:07:59.191 END TEST dd_flag_noatime 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:07:59.191 ************************************ 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:07:59.191 ************************************ 00:07:59.191 START TEST dd_flags_misc 00:07:59.191 ************************************ 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1125 -- # io 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.191 06:00:24 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:07:59.191 [2024-10-01 06:00:24.796784] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
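The noatime test that completed above records dd.dump0's access time with stat --printf=%X, copies the file once with --iflag=noatime after a one second sleep and asserts the atime is unchanged, then copies again without the flag and asserts the atime has moved forward. A condensed sketch of that pattern; SPDK_DD is an assumed shorthand, and the second check relies on the filesystem actually updating atimes on read, as the harness does.

# Sketch: --iflag=noatime should leave the source atime untouched.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_before=$(stat --printf=%X dd.dump0)
sleep 1
"$SPDK_DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before )) || echo "atime changed under noatime" >&2
"$SPDK_DD" --if=dd.dump0 --of=dd.dump1
(( $(stat --printf=%X dd.dump0) > atime_before )) || echo "atime did not advance" >&2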
00:07:59.191 [2024-10-01 06:00:24.796916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72364 ] 00:07:59.450 [2024-10-01 06:00:24.931762] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.450 [2024-10-01 06:00:24.967335] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.450 [2024-10-01 06:00:24.995371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.708  Copying: 512/512 [B] (average 500 kBps) 00:07:59.708 00:07:59.708 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4yu4127mgdkc74nzgp74exwfcorqjcl55jvbcifz8us5kkrnvum3ayfb9qlm2nakrhflj1srqfmt4cnp0pvjmnqb2vreua4rxpnys5s7rnucnooq9rji3o4q9xxz81u7xezn23a4io695wy65382wm2fg7dyun8zah99q3qqt0ynzfsrzy2mdyknlg7tvtexz5x3rfuofnw0cui0koxb3zyfwdh3o288gxms9fnqswxqwvpe2nqd0bk0wt3c1yn5j2libsgz1ke0zpvix2jvjqdkh3094m4nqumn89s3pzk71x7ej7wiw3g4049sdgw59qd4k6s8ur6ttjdq8rtl7liv66nxula9y74e0ahpd028vbhxb7s95w656n3p26d2tgda8vzl0q9qlkc584k5vmak42hnyyk86ag1aajj2rhyiw856rsvjnuoaebs6yjm6nnqgf2l64hdulbeisegpjyk0wvqo7v0rhafs1qf151fwad1hd6cn6g316g6qwz9 == \4\y\u\4\1\2\7\m\g\d\k\c\7\4\n\z\g\p\7\4\e\x\w\f\c\o\r\q\j\c\l\5\5\j\v\b\c\i\f\z\8\u\s\5\k\k\r\n\v\u\m\3\a\y\f\b\9\q\l\m\2\n\a\k\r\h\f\l\j\1\s\r\q\f\m\t\4\c\n\p\0\p\v\j\m\n\q\b\2\v\r\e\u\a\4\r\x\p\n\y\s\5\s\7\r\n\u\c\n\o\o\q\9\r\j\i\3\o\4\q\9\x\x\z\8\1\u\7\x\e\z\n\2\3\a\4\i\o\6\9\5\w\y\6\5\3\8\2\w\m\2\f\g\7\d\y\u\n\8\z\a\h\9\9\q\3\q\q\t\0\y\n\z\f\s\r\z\y\2\m\d\y\k\n\l\g\7\t\v\t\e\x\z\5\x\3\r\f\u\o\f\n\w\0\c\u\i\0\k\o\x\b\3\z\y\f\w\d\h\3\o\2\8\8\g\x\m\s\9\f\n\q\s\w\x\q\w\v\p\e\2\n\q\d\0\b\k\0\w\t\3\c\1\y\n\5\j\2\l\i\b\s\g\z\1\k\e\0\z\p\v\i\x\2\j\v\j\q\d\k\h\3\0\9\4\m\4\n\q\u\m\n\8\9\s\3\p\z\k\7\1\x\7\e\j\7\w\i\w\3\g\4\0\4\9\s\d\g\w\5\9\q\d\4\k\6\s\8\u\r\6\t\t\j\d\q\8\r\t\l\7\l\i\v\6\6\n\x\u\l\a\9\y\7\4\e\0\a\h\p\d\0\2\8\v\b\h\x\b\7\s\9\5\w\6\5\6\n\3\p\2\6\d\2\t\g\d\a\8\v\z\l\0\q\9\q\l\k\c\5\8\4\k\5\v\m\a\k\4\2\h\n\y\y\k\8\6\a\g\1\a\a\j\j\2\r\h\y\i\w\8\5\6\r\s\v\j\n\u\o\a\e\b\s\6\y\j\m\6\n\n\q\g\f\2\l\6\4\h\d\u\l\b\e\i\s\e\g\p\j\y\k\0\w\v\q\o\7\v\0\r\h\a\f\s\1\q\f\1\5\1\f\w\a\d\1\h\d\6\c\n\6\g\3\1\6\g\6\q\w\z\9 ]] 00:07:59.708 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.708 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:07:59.708 [2024-10-01 06:00:25.167033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:59.709 [2024-10-01 06:00:25.167135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72378 ] 00:07:59.709 [2024-10-01 06:00:25.289155] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.967 [2024-10-01 06:00:25.326713] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.967 [2024-10-01 06:00:25.354767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.967  Copying: 512/512 [B] (average 500 kBps) 00:07:59.967 00:07:59.967 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4yu4127mgdkc74nzgp74exwfcorqjcl55jvbcifz8us5kkrnvum3ayfb9qlm2nakrhflj1srqfmt4cnp0pvjmnqb2vreua4rxpnys5s7rnucnooq9rji3o4q9xxz81u7xezn23a4io695wy65382wm2fg7dyun8zah99q3qqt0ynzfsrzy2mdyknlg7tvtexz5x3rfuofnw0cui0koxb3zyfwdh3o288gxms9fnqswxqwvpe2nqd0bk0wt3c1yn5j2libsgz1ke0zpvix2jvjqdkh3094m4nqumn89s3pzk71x7ej7wiw3g4049sdgw59qd4k6s8ur6ttjdq8rtl7liv66nxula9y74e0ahpd028vbhxb7s95w656n3p26d2tgda8vzl0q9qlkc584k5vmak42hnyyk86ag1aajj2rhyiw856rsvjnuoaebs6yjm6nnqgf2l64hdulbeisegpjyk0wvqo7v0rhafs1qf151fwad1hd6cn6g316g6qwz9 == \4\y\u\4\1\2\7\m\g\d\k\c\7\4\n\z\g\p\7\4\e\x\w\f\c\o\r\q\j\c\l\5\5\j\v\b\c\i\f\z\8\u\s\5\k\k\r\n\v\u\m\3\a\y\f\b\9\q\l\m\2\n\a\k\r\h\f\l\j\1\s\r\q\f\m\t\4\c\n\p\0\p\v\j\m\n\q\b\2\v\r\e\u\a\4\r\x\p\n\y\s\5\s\7\r\n\u\c\n\o\o\q\9\r\j\i\3\o\4\q\9\x\x\z\8\1\u\7\x\e\z\n\2\3\a\4\i\o\6\9\5\w\y\6\5\3\8\2\w\m\2\f\g\7\d\y\u\n\8\z\a\h\9\9\q\3\q\q\t\0\y\n\z\f\s\r\z\y\2\m\d\y\k\n\l\g\7\t\v\t\e\x\z\5\x\3\r\f\u\o\f\n\w\0\c\u\i\0\k\o\x\b\3\z\y\f\w\d\h\3\o\2\8\8\g\x\m\s\9\f\n\q\s\w\x\q\w\v\p\e\2\n\q\d\0\b\k\0\w\t\3\c\1\y\n\5\j\2\l\i\b\s\g\z\1\k\e\0\z\p\v\i\x\2\j\v\j\q\d\k\h\3\0\9\4\m\4\n\q\u\m\n\8\9\s\3\p\z\k\7\1\x\7\e\j\7\w\i\w\3\g\4\0\4\9\s\d\g\w\5\9\q\d\4\k\6\s\8\u\r\6\t\t\j\d\q\8\r\t\l\7\l\i\v\6\6\n\x\u\l\a\9\y\7\4\e\0\a\h\p\d\0\2\8\v\b\h\x\b\7\s\9\5\w\6\5\6\n\3\p\2\6\d\2\t\g\d\a\8\v\z\l\0\q\9\q\l\k\c\5\8\4\k\5\v\m\a\k\4\2\h\n\y\y\k\8\6\a\g\1\a\a\j\j\2\r\h\y\i\w\8\5\6\r\s\v\j\n\u\o\a\e\b\s\6\y\j\m\6\n\n\q\g\f\2\l\6\4\h\d\u\l\b\e\i\s\e\g\p\j\y\k\0\w\v\q\o\7\v\0\r\h\a\f\s\1\q\f\1\5\1\f\w\a\d\1\h\d\6\c\n\6\g\3\1\6\g\6\q\w\z\9 ]] 00:07:59.967 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:07:59.967 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:07:59.967 [2024-10-01 06:00:25.560741] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:07:59.967 [2024-10-01 06:00:25.560834] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72382 ] 00:08:00.226 [2024-10-01 06:00:25.688151] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.226 [2024-10-01 06:00:25.720203] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.226 [2024-10-01 06:00:25.747393] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.484  Copying: 512/512 [B] (average 125 kBps) 00:08:00.484 00:08:00.484 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4yu4127mgdkc74nzgp74exwfcorqjcl55jvbcifz8us5kkrnvum3ayfb9qlm2nakrhflj1srqfmt4cnp0pvjmnqb2vreua4rxpnys5s7rnucnooq9rji3o4q9xxz81u7xezn23a4io695wy65382wm2fg7dyun8zah99q3qqt0ynzfsrzy2mdyknlg7tvtexz5x3rfuofnw0cui0koxb3zyfwdh3o288gxms9fnqswxqwvpe2nqd0bk0wt3c1yn5j2libsgz1ke0zpvix2jvjqdkh3094m4nqumn89s3pzk71x7ej7wiw3g4049sdgw59qd4k6s8ur6ttjdq8rtl7liv66nxula9y74e0ahpd028vbhxb7s95w656n3p26d2tgda8vzl0q9qlkc584k5vmak42hnyyk86ag1aajj2rhyiw856rsvjnuoaebs6yjm6nnqgf2l64hdulbeisegpjyk0wvqo7v0rhafs1qf151fwad1hd6cn6g316g6qwz9 == \4\y\u\4\1\2\7\m\g\d\k\c\7\4\n\z\g\p\7\4\e\x\w\f\c\o\r\q\j\c\l\5\5\j\v\b\c\i\f\z\8\u\s\5\k\k\r\n\v\u\m\3\a\y\f\b\9\q\l\m\2\n\a\k\r\h\f\l\j\1\s\r\q\f\m\t\4\c\n\p\0\p\v\j\m\n\q\b\2\v\r\e\u\a\4\r\x\p\n\y\s\5\s\7\r\n\u\c\n\o\o\q\9\r\j\i\3\o\4\q\9\x\x\z\8\1\u\7\x\e\z\n\2\3\a\4\i\o\6\9\5\w\y\6\5\3\8\2\w\m\2\f\g\7\d\y\u\n\8\z\a\h\9\9\q\3\q\q\t\0\y\n\z\f\s\r\z\y\2\m\d\y\k\n\l\g\7\t\v\t\e\x\z\5\x\3\r\f\u\o\f\n\w\0\c\u\i\0\k\o\x\b\3\z\y\f\w\d\h\3\o\2\8\8\g\x\m\s\9\f\n\q\s\w\x\q\w\v\p\e\2\n\q\d\0\b\k\0\w\t\3\c\1\y\n\5\j\2\l\i\b\s\g\z\1\k\e\0\z\p\v\i\x\2\j\v\j\q\d\k\h\3\0\9\4\m\4\n\q\u\m\n\8\9\s\3\p\z\k\7\1\x\7\e\j\7\w\i\w\3\g\4\0\4\9\s\d\g\w\5\9\q\d\4\k\6\s\8\u\r\6\t\t\j\d\q\8\r\t\l\7\l\i\v\6\6\n\x\u\l\a\9\y\7\4\e\0\a\h\p\d\0\2\8\v\b\h\x\b\7\s\9\5\w\6\5\6\n\3\p\2\6\d\2\t\g\d\a\8\v\z\l\0\q\9\q\l\k\c\5\8\4\k\5\v\m\a\k\4\2\h\n\y\y\k\8\6\a\g\1\a\a\j\j\2\r\h\y\i\w\8\5\6\r\s\v\j\n\u\o\a\e\b\s\6\y\j\m\6\n\n\q\g\f\2\l\6\4\h\d\u\l\b\e\i\s\e\g\p\j\y\k\0\w\v\q\o\7\v\0\r\h\a\f\s\1\q\f\1\5\1\f\w\a\d\1\h\d\6\c\n\6\g\3\1\6\g\6\q\w\z\9 ]] 00:08:00.484 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.484 06:00:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:00.484 [2024-10-01 06:00:25.943342] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:00.484 [2024-10-01 06:00:25.943435] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72391 ] 00:08:00.484 [2024-10-01 06:00:26.079537] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.743 [2024-10-01 06:00:26.114655] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.743 [2024-10-01 06:00:26.142409] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:00.743  Copying: 512/512 [B] (average 500 kBps) 00:08:00.743 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 4yu4127mgdkc74nzgp74exwfcorqjcl55jvbcifz8us5kkrnvum3ayfb9qlm2nakrhflj1srqfmt4cnp0pvjmnqb2vreua4rxpnys5s7rnucnooq9rji3o4q9xxz81u7xezn23a4io695wy65382wm2fg7dyun8zah99q3qqt0ynzfsrzy2mdyknlg7tvtexz5x3rfuofnw0cui0koxb3zyfwdh3o288gxms9fnqswxqwvpe2nqd0bk0wt3c1yn5j2libsgz1ke0zpvix2jvjqdkh3094m4nqumn89s3pzk71x7ej7wiw3g4049sdgw59qd4k6s8ur6ttjdq8rtl7liv66nxula9y74e0ahpd028vbhxb7s95w656n3p26d2tgda8vzl0q9qlkc584k5vmak42hnyyk86ag1aajj2rhyiw856rsvjnuoaebs6yjm6nnqgf2l64hdulbeisegpjyk0wvqo7v0rhafs1qf151fwad1hd6cn6g316g6qwz9 == \4\y\u\4\1\2\7\m\g\d\k\c\7\4\n\z\g\p\7\4\e\x\w\f\c\o\r\q\j\c\l\5\5\j\v\b\c\i\f\z\8\u\s\5\k\k\r\n\v\u\m\3\a\y\f\b\9\q\l\m\2\n\a\k\r\h\f\l\j\1\s\r\q\f\m\t\4\c\n\p\0\p\v\j\m\n\q\b\2\v\r\e\u\a\4\r\x\p\n\y\s\5\s\7\r\n\u\c\n\o\o\q\9\r\j\i\3\o\4\q\9\x\x\z\8\1\u\7\x\e\z\n\2\3\a\4\i\o\6\9\5\w\y\6\5\3\8\2\w\m\2\f\g\7\d\y\u\n\8\z\a\h\9\9\q\3\q\q\t\0\y\n\z\f\s\r\z\y\2\m\d\y\k\n\l\g\7\t\v\t\e\x\z\5\x\3\r\f\u\o\f\n\w\0\c\u\i\0\k\o\x\b\3\z\y\f\w\d\h\3\o\2\8\8\g\x\m\s\9\f\n\q\s\w\x\q\w\v\p\e\2\n\q\d\0\b\k\0\w\t\3\c\1\y\n\5\j\2\l\i\b\s\g\z\1\k\e\0\z\p\v\i\x\2\j\v\j\q\d\k\h\3\0\9\4\m\4\n\q\u\m\n\8\9\s\3\p\z\k\7\1\x\7\e\j\7\w\i\w\3\g\4\0\4\9\s\d\g\w\5\9\q\d\4\k\6\s\8\u\r\6\t\t\j\d\q\8\r\t\l\7\l\i\v\6\6\n\x\u\l\a\9\y\7\4\e\0\a\h\p\d\0\2\8\v\b\h\x\b\7\s\9\5\w\6\5\6\n\3\p\2\6\d\2\t\g\d\a\8\v\z\l\0\q\9\q\l\k\c\5\8\4\k\5\v\m\a\k\4\2\h\n\y\y\k\8\6\a\g\1\a\a\j\j\2\r\h\y\i\w\8\5\6\r\s\v\j\n\u\o\a\e\b\s\6\y\j\m\6\n\n\q\g\f\2\l\6\4\h\d\u\l\b\e\i\s\e\g\p\j\y\k\0\w\v\q\o\7\v\0\r\h\a\f\s\1\q\f\1\5\1\f\w\a\d\1\h\d\6\c\n\6\g\3\1\6\g\6\q\w\z\9 ]] 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:00.743 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:00.743 [2024-10-01 06:00:26.347696] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:00.743 [2024-10-01 06:00:26.347797] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72401 ] 00:08:01.002 [2024-10-01 06:00:26.475146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.002 [2024-10-01 06:00:26.508127] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.002 [2024-10-01 06:00:26.535625] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.261  Copying: 512/512 [B] (average 500 kBps) 00:08:01.261 00:08:01.261 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tf3jna3ovzjkozedkb34rhksfxxxgk6jyp5e209hwh8jpfvx6dkdqsbzh7badpme14p62oclkzc552xgxuwq1yify538d1oarj3606euuxjy4z8bag8g6g35qztk5ol3h09htf0n2mxsrnmf8vodhq2i2nrjecc18dh94k9800rz0fss1depz1hekjeilpd5u91dub59ms1f24ulopawvyuttbtlvfzh9xgt3aygga4pz3r4arjbozj8bywx65kr1p76vj668exzpf5av3chviefixem71npc289klim6d5m1caamaekwkzgb3unbj9vxzmvx8besuo1lwzvy406rsuknrkzirs6ir825lrkxvhhfan23acqbr468ahjzs1c5itoc368ljyd98p8bhuhzvkqsurvovmsjhgrdrfntbwga2zs98offs86tgwqqxq5zz6pj10i7ztew36zbt30mjqzp06ajyaxkmp0jmrxa21rki6uozy065evicju4la8 == \t\f\3\j\n\a\3\o\v\z\j\k\o\z\e\d\k\b\3\4\r\h\k\s\f\x\x\x\g\k\6\j\y\p\5\e\2\0\9\h\w\h\8\j\p\f\v\x\6\d\k\d\q\s\b\z\h\7\b\a\d\p\m\e\1\4\p\6\2\o\c\l\k\z\c\5\5\2\x\g\x\u\w\q\1\y\i\f\y\5\3\8\d\1\o\a\r\j\3\6\0\6\e\u\u\x\j\y\4\z\8\b\a\g\8\g\6\g\3\5\q\z\t\k\5\o\l\3\h\0\9\h\t\f\0\n\2\m\x\s\r\n\m\f\8\v\o\d\h\q\2\i\2\n\r\j\e\c\c\1\8\d\h\9\4\k\9\8\0\0\r\z\0\f\s\s\1\d\e\p\z\1\h\e\k\j\e\i\l\p\d\5\u\9\1\d\u\b\5\9\m\s\1\f\2\4\u\l\o\p\a\w\v\y\u\t\t\b\t\l\v\f\z\h\9\x\g\t\3\a\y\g\g\a\4\p\z\3\r\4\a\r\j\b\o\z\j\8\b\y\w\x\6\5\k\r\1\p\7\6\v\j\6\6\8\e\x\z\p\f\5\a\v\3\c\h\v\i\e\f\i\x\e\m\7\1\n\p\c\2\8\9\k\l\i\m\6\d\5\m\1\c\a\a\m\a\e\k\w\k\z\g\b\3\u\n\b\j\9\v\x\z\m\v\x\8\b\e\s\u\o\1\l\w\z\v\y\4\0\6\r\s\u\k\n\r\k\z\i\r\s\6\i\r\8\2\5\l\r\k\x\v\h\h\f\a\n\2\3\a\c\q\b\r\4\6\8\a\h\j\z\s\1\c\5\i\t\o\c\3\6\8\l\j\y\d\9\8\p\8\b\h\u\h\z\v\k\q\s\u\r\v\o\v\m\s\j\h\g\r\d\r\f\n\t\b\w\g\a\2\z\s\9\8\o\f\f\s\8\6\t\g\w\q\q\x\q\5\z\z\6\p\j\1\0\i\7\z\t\e\w\3\6\z\b\t\3\0\m\j\q\z\p\0\6\a\j\y\a\x\k\m\p\0\j\m\r\x\a\2\1\r\k\i\6\u\o\z\y\0\6\5\e\v\i\c\j\u\4\l\a\8 ]] 00:08:01.261 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.261 06:00:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:01.261 [2024-10-01 06:00:26.723882] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:01.261 [2024-10-01 06:00:26.723992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72405 ] 00:08:01.261 [2024-10-01 06:00:26.854026] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.520 [2024-10-01 06:00:26.890230] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.520 [2024-10-01 06:00:26.918385] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.520  Copying: 512/512 [B] (average 500 kBps) 00:08:01.520 00:08:01.520 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tf3jna3ovzjkozedkb34rhksfxxxgk6jyp5e209hwh8jpfvx6dkdqsbzh7badpme14p62oclkzc552xgxuwq1yify538d1oarj3606euuxjy4z8bag8g6g35qztk5ol3h09htf0n2mxsrnmf8vodhq2i2nrjecc18dh94k9800rz0fss1depz1hekjeilpd5u91dub59ms1f24ulopawvyuttbtlvfzh9xgt3aygga4pz3r4arjbozj8bywx65kr1p76vj668exzpf5av3chviefixem71npc289klim6d5m1caamaekwkzgb3unbj9vxzmvx8besuo1lwzvy406rsuknrkzirs6ir825lrkxvhhfan23acqbr468ahjzs1c5itoc368ljyd98p8bhuhzvkqsurvovmsjhgrdrfntbwga2zs98offs86tgwqqxq5zz6pj10i7ztew36zbt30mjqzp06ajyaxkmp0jmrxa21rki6uozy065evicju4la8 == \t\f\3\j\n\a\3\o\v\z\j\k\o\z\e\d\k\b\3\4\r\h\k\s\f\x\x\x\g\k\6\j\y\p\5\e\2\0\9\h\w\h\8\j\p\f\v\x\6\d\k\d\q\s\b\z\h\7\b\a\d\p\m\e\1\4\p\6\2\o\c\l\k\z\c\5\5\2\x\g\x\u\w\q\1\y\i\f\y\5\3\8\d\1\o\a\r\j\3\6\0\6\e\u\u\x\j\y\4\z\8\b\a\g\8\g\6\g\3\5\q\z\t\k\5\o\l\3\h\0\9\h\t\f\0\n\2\m\x\s\r\n\m\f\8\v\o\d\h\q\2\i\2\n\r\j\e\c\c\1\8\d\h\9\4\k\9\8\0\0\r\z\0\f\s\s\1\d\e\p\z\1\h\e\k\j\e\i\l\p\d\5\u\9\1\d\u\b\5\9\m\s\1\f\2\4\u\l\o\p\a\w\v\y\u\t\t\b\t\l\v\f\z\h\9\x\g\t\3\a\y\g\g\a\4\p\z\3\r\4\a\r\j\b\o\z\j\8\b\y\w\x\6\5\k\r\1\p\7\6\v\j\6\6\8\e\x\z\p\f\5\a\v\3\c\h\v\i\e\f\i\x\e\m\7\1\n\p\c\2\8\9\k\l\i\m\6\d\5\m\1\c\a\a\m\a\e\k\w\k\z\g\b\3\u\n\b\j\9\v\x\z\m\v\x\8\b\e\s\u\o\1\l\w\z\v\y\4\0\6\r\s\u\k\n\r\k\z\i\r\s\6\i\r\8\2\5\l\r\k\x\v\h\h\f\a\n\2\3\a\c\q\b\r\4\6\8\a\h\j\z\s\1\c\5\i\t\o\c\3\6\8\l\j\y\d\9\8\p\8\b\h\u\h\z\v\k\q\s\u\r\v\o\v\m\s\j\h\g\r\d\r\f\n\t\b\w\g\a\2\z\s\9\8\o\f\f\s\8\6\t\g\w\q\q\x\q\5\z\z\6\p\j\1\0\i\7\z\t\e\w\3\6\z\b\t\3\0\m\j\q\z\p\0\6\a\j\y\a\x\k\m\p\0\j\m\r\x\a\2\1\r\k\i\6\u\o\z\y\0\6\5\e\v\i\c\j\u\4\l\a\8 ]] 00:08:01.520 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:01.520 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:01.520 [2024-10-01 06:00:27.119883] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:01.520 [2024-10-01 06:00:27.120002] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72420 ] 00:08:01.778 [2024-10-01 06:00:27.254541] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.779 [2024-10-01 06:00:27.287020] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.779 [2024-10-01 06:00:27.316739] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.037  Copying: 512/512 [B] (average 250 kBps) 00:08:02.037 00:08:02.037 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tf3jna3ovzjkozedkb34rhksfxxxgk6jyp5e209hwh8jpfvx6dkdqsbzh7badpme14p62oclkzc552xgxuwq1yify538d1oarj3606euuxjy4z8bag8g6g35qztk5ol3h09htf0n2mxsrnmf8vodhq2i2nrjecc18dh94k9800rz0fss1depz1hekjeilpd5u91dub59ms1f24ulopawvyuttbtlvfzh9xgt3aygga4pz3r4arjbozj8bywx65kr1p76vj668exzpf5av3chviefixem71npc289klim6d5m1caamaekwkzgb3unbj9vxzmvx8besuo1lwzvy406rsuknrkzirs6ir825lrkxvhhfan23acqbr468ahjzs1c5itoc368ljyd98p8bhuhzvkqsurvovmsjhgrdrfntbwga2zs98offs86tgwqqxq5zz6pj10i7ztew36zbt30mjqzp06ajyaxkmp0jmrxa21rki6uozy065evicju4la8 == \t\f\3\j\n\a\3\o\v\z\j\k\o\z\e\d\k\b\3\4\r\h\k\s\f\x\x\x\g\k\6\j\y\p\5\e\2\0\9\h\w\h\8\j\p\f\v\x\6\d\k\d\q\s\b\z\h\7\b\a\d\p\m\e\1\4\p\6\2\o\c\l\k\z\c\5\5\2\x\g\x\u\w\q\1\y\i\f\y\5\3\8\d\1\o\a\r\j\3\6\0\6\e\u\u\x\j\y\4\z\8\b\a\g\8\g\6\g\3\5\q\z\t\k\5\o\l\3\h\0\9\h\t\f\0\n\2\m\x\s\r\n\m\f\8\v\o\d\h\q\2\i\2\n\r\j\e\c\c\1\8\d\h\9\4\k\9\8\0\0\r\z\0\f\s\s\1\d\e\p\z\1\h\e\k\j\e\i\l\p\d\5\u\9\1\d\u\b\5\9\m\s\1\f\2\4\u\l\o\p\a\w\v\y\u\t\t\b\t\l\v\f\z\h\9\x\g\t\3\a\y\g\g\a\4\p\z\3\r\4\a\r\j\b\o\z\j\8\b\y\w\x\6\5\k\r\1\p\7\6\v\j\6\6\8\e\x\z\p\f\5\a\v\3\c\h\v\i\e\f\i\x\e\m\7\1\n\p\c\2\8\9\k\l\i\m\6\d\5\m\1\c\a\a\m\a\e\k\w\k\z\g\b\3\u\n\b\j\9\v\x\z\m\v\x\8\b\e\s\u\o\1\l\w\z\v\y\4\0\6\r\s\u\k\n\r\k\z\i\r\s\6\i\r\8\2\5\l\r\k\x\v\h\h\f\a\n\2\3\a\c\q\b\r\4\6\8\a\h\j\z\s\1\c\5\i\t\o\c\3\6\8\l\j\y\d\9\8\p\8\b\h\u\h\z\v\k\q\s\u\r\v\o\v\m\s\j\h\g\r\d\r\f\n\t\b\w\g\a\2\z\s\9\8\o\f\f\s\8\6\t\g\w\q\q\x\q\5\z\z\6\p\j\1\0\i\7\z\t\e\w\3\6\z\b\t\3\0\m\j\q\z\p\0\6\a\j\y\a\x\k\m\p\0\j\m\r\x\a\2\1\r\k\i\6\u\o\z\y\0\6\5\e\v\i\c\j\u\4\l\a\8 ]] 00:08:02.037 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.037 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:02.037 [2024-10-01 06:00:27.512412] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:02.037 [2024-10-01 06:00:27.512513] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72424 ] 00:08:02.037 [2024-10-01 06:00:27.648224] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.296 [2024-10-01 06:00:27.682354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.296 [2024-10-01 06:00:27.709659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.296  Copying: 512/512 [B] (average 250 kBps) 00:08:02.296 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ tf3jna3ovzjkozedkb34rhksfxxxgk6jyp5e209hwh8jpfvx6dkdqsbzh7badpme14p62oclkzc552xgxuwq1yify538d1oarj3606euuxjy4z8bag8g6g35qztk5ol3h09htf0n2mxsrnmf8vodhq2i2nrjecc18dh94k9800rz0fss1depz1hekjeilpd5u91dub59ms1f24ulopawvyuttbtlvfzh9xgt3aygga4pz3r4arjbozj8bywx65kr1p76vj668exzpf5av3chviefixem71npc289klim6d5m1caamaekwkzgb3unbj9vxzmvx8besuo1lwzvy406rsuknrkzirs6ir825lrkxvhhfan23acqbr468ahjzs1c5itoc368ljyd98p8bhuhzvkqsurvovmsjhgrdrfntbwga2zs98offs86tgwqqxq5zz6pj10i7ztew36zbt30mjqzp06ajyaxkmp0jmrxa21rki6uozy065evicju4la8 == \t\f\3\j\n\a\3\o\v\z\j\k\o\z\e\d\k\b\3\4\r\h\k\s\f\x\x\x\g\k\6\j\y\p\5\e\2\0\9\h\w\h\8\j\p\f\v\x\6\d\k\d\q\s\b\z\h\7\b\a\d\p\m\e\1\4\p\6\2\o\c\l\k\z\c\5\5\2\x\g\x\u\w\q\1\y\i\f\y\5\3\8\d\1\o\a\r\j\3\6\0\6\e\u\u\x\j\y\4\z\8\b\a\g\8\g\6\g\3\5\q\z\t\k\5\o\l\3\h\0\9\h\t\f\0\n\2\m\x\s\r\n\m\f\8\v\o\d\h\q\2\i\2\n\r\j\e\c\c\1\8\d\h\9\4\k\9\8\0\0\r\z\0\f\s\s\1\d\e\p\z\1\h\e\k\j\e\i\l\p\d\5\u\9\1\d\u\b\5\9\m\s\1\f\2\4\u\l\o\p\a\w\v\y\u\t\t\b\t\l\v\f\z\h\9\x\g\t\3\a\y\g\g\a\4\p\z\3\r\4\a\r\j\b\o\z\j\8\b\y\w\x\6\5\k\r\1\p\7\6\v\j\6\6\8\e\x\z\p\f\5\a\v\3\c\h\v\i\e\f\i\x\e\m\7\1\n\p\c\2\8\9\k\l\i\m\6\d\5\m\1\c\a\a\m\a\e\k\w\k\z\g\b\3\u\n\b\j\9\v\x\z\m\v\x\8\b\e\s\u\o\1\l\w\z\v\y\4\0\6\r\s\u\k\n\r\k\z\i\r\s\6\i\r\8\2\5\l\r\k\x\v\h\h\f\a\n\2\3\a\c\q\b\r\4\6\8\a\h\j\z\s\1\c\5\i\t\o\c\3\6\8\l\j\y\d\9\8\p\8\b\h\u\h\z\v\k\q\s\u\r\v\o\v\m\s\j\h\g\r\d\r\f\n\t\b\w\g\a\2\z\s\9\8\o\f\f\s\8\6\t\g\w\q\q\x\q\5\z\z\6\p\j\1\0\i\7\z\t\e\w\3\6\z\b\t\3\0\m\j\q\z\p\0\6\a\j\y\a\x\k\m\p\0\j\m\r\x\a\2\1\r\k\i\6\u\o\z\y\0\6\5\e\v\i\c\j\u\4\l\a\8 ]] 00:08:02.296 00:08:02.296 real 0m3.114s 00:08:02.296 user 0m1.518s 00:08:02.296 sys 0m1.357s 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.296 ************************************ 00:08:02.296 END TEST dd_flags_misc 00:08:02.296 ************************************ 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:02.296 * Second test run, disabling liburing, forcing AIO 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 
00:08:02.296 ************************************ 00:08:02.296 START TEST dd_flag_append_forced_aio 00:08:02.296 ************************************ 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1125 -- # append 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.296 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=4xt8ltnzswgzx1zjsuoeby2755tex26k 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=5v2il1agnf2s0waqy0a1yar80qbdsqv3 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s 4xt8ltnzswgzx1zjsuoeby2755tex26k 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 5v2il1agnf2s0waqy0a1yar80qbdsqv3 00:08:02.554 06:00:27 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:02.554 [2024-10-01 06:00:27.964867] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:02.554 [2024-10-01 06:00:27.965008] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72451 ] 00:08:02.554 [2024-10-01 06:00:28.098557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.554 [2024-10-01 06:00:28.131751] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.554 [2024-10-01 06:00:28.159598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:02.812  Copying: 32/32 [B] (average 31 kBps) 00:08:02.812 00:08:02.812 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 5v2il1agnf2s0waqy0a1yar80qbdsqv34xt8ltnzswgzx1zjsuoeby2755tex26k == \5\v\2\i\l\1\a\g\n\f\2\s\0\w\a\q\y\0\a\1\y\a\r\8\0\q\b\d\s\q\v\3\4\x\t\8\l\t\n\z\s\w\g\z\x\1\z\j\s\u\o\e\b\y\2\7\5\5\t\e\x\2\6\k ]] 00:08:02.812 00:08:02.812 real 0m0.416s 00:08:02.812 user 0m0.188s 00:08:02.812 sys 0m0.108s 00:08:02.812 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:02.812 ************************************ 00:08:02.812 END TEST dd_flag_append_forced_aio 00:08:02.812 ************************************ 00:08:02.812 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:02.812 06:00:28 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:02.812 06:00:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.813 ************************************ 00:08:02.813 START TEST dd_flag_directory_forced_aio 00:08:02.813 ************************************ 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1125 -- # directory 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- 
common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:02.813 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.071 [2024-10-01 06:00:28.431218] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:03.071 [2024-10-01 06:00:28.431365] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72479 ] 00:08:03.071 [2024-10-01 06:00:28.569034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.071 [2024-10-01 06:00:28.602002] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.071 [2024-10-01 06:00:28.633518] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.071 [2024-10-01 06:00:28.649988] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.071 [2024-10-01 06:00:28.650103] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.071 [2024-10-01 06:00:28.650131] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.329 [2024-10-01 06:00:28.709796] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 
00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.330 06:00:28 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:03.330 [2024-10-01 06:00:28.828483] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:03.330 [2024-10-01 06:00:28.828599] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72483 ] 00:08:03.589 [2024-10-01 06:00:28.964791] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.589 [2024-10-01 06:00:29.004524] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.589 [2024-10-01 06:00:29.033746] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.589 [2024-10-01 06:00:29.048896] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.589 [2024-10-01 06:00:29.048972] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:03.589 [2024-10-01 06:00:29.049001] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:03.589 [2024-10-01 06:00:29.111340] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@653 -- # es=236 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@662 -- # es=108 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 
00:08:03.589 00:08:03.589 real 0m0.804s 00:08:03.589 user 0m0.396s 00:08:03.589 sys 0m0.201s 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:03.589 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:03.589 ************************************ 00:08:03.589 END TEST dd_flag_directory_forced_aio 00:08:03.589 ************************************ 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:03.848 ************************************ 00:08:03.848 START TEST dd_flag_nofollow_forced_aio 00:08:03.848 ************************************ 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1125 -- # nofollow 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.848 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.849 06:00:29 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.849 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:03.849 [2024-10-01 06:00:29.296500] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:03.849 [2024-10-01 06:00:29.296620] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72517 ] 00:08:03.849 [2024-10-01 06:00:29.432479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.108 [2024-10-01 06:00:29.466084] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.108 [2024-10-01 06:00:29.494166] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.108 [2024-10-01 06:00:29.509389] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:04.108 [2024-10-01 06:00:29.509457] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:04.108 [2024-10-01 06:00:29.509486] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.108 [2024-10-01 06:00:29.567659] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@650 -- # local es=0 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.108 06:00:29 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:04.108 06:00:29 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:04.108 [2024-10-01 06:00:29.685165] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:04.108 [2024-10-01 06:00:29.685267] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72521 ] 00:08:04.367 [2024-10-01 06:00:29.820447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.367 [2024-10-01 06:00:29.853034] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.367 [2024-10-01 06:00:29.880584] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.367 [2024-10-01 06:00:29.895675] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:04.367 [2024-10-01 06:00:29.895744] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:04.367 [2024-10-01 06:00:29.895773] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.367 [2024-10-01 06:00:29.954525] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@653 -- # es=216 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@662 -- # es=88 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@670 -- # es=1 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 -- # gen_bytes 512 00:08:04.626 06:00:30 
spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.626 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:04.626 [2024-10-01 06:00:30.075456] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:04.626 [2024-10-01 06:00:30.075560] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72534 ] 00:08:04.626 [2024-10-01 06:00:30.206105] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.626 [2024-10-01 06:00:30.240339] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.885 [2024-10-01 06:00:30.268399] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.885  Copying: 512/512 [B] (average 500 kBps) 00:08:04.885 00:08:04.885 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ 5mu746qlqlzgq8yqlx85uprtxj2hjope4m989u13u9il5dzviypjqelgh1e1m5t04c8v7zlgoju6d7g6j7g3nn346qx7ew2yrhvdqvccaefyc5cnq4xkkyluh3ynhlqhaeaor5almdfzby1poxxczpqyjhzop5gds5k5zhp3fa1e9hlyzsm5fbl4carn1k3yzuzcd0bh0c3khpk50opnogf3qst7hbfc3yijx8weqd7ipfmoapjrm5o6mu0mpgezaee1kisgdbaeaene6meq27dc1qb4ftshasl01bt63tdne7nyt9i3rn1zm6e3gaei8xeedv0rabhiswes0fhc54lnjfc7mloz575i7346gv0xaebdkrak5bw5z0xw4p6pw3uh4ace852wduuvdh5l871akaoshb5rhqx83h3l0uw33m4jif4xy8mhnbx6xc5720fiomj1qofz0oi329xa33tsm5y2dpdz12xi2cglrr0ff6wf8peg3ipyhgj6tsk9 == \5\m\u\7\4\6\q\l\q\l\z\g\q\8\y\q\l\x\8\5\u\p\r\t\x\j\2\h\j\o\p\e\4\m\9\8\9\u\1\3\u\9\i\l\5\d\z\v\i\y\p\j\q\e\l\g\h\1\e\1\m\5\t\0\4\c\8\v\7\z\l\g\o\j\u\6\d\7\g\6\j\7\g\3\n\n\3\4\6\q\x\7\e\w\2\y\r\h\v\d\q\v\c\c\a\e\f\y\c\5\c\n\q\4\x\k\k\y\l\u\h\3\y\n\h\l\q\h\a\e\a\o\r\5\a\l\m\d\f\z\b\y\1\p\o\x\x\c\z\p\q\y\j\h\z\o\p\5\g\d\s\5\k\5\z\h\p\3\f\a\1\e\9\h\l\y\z\s\m\5\f\b\l\4\c\a\r\n\1\k\3\y\z\u\z\c\d\0\b\h\0\c\3\k\h\p\k\5\0\o\p\n\o\g\f\3\q\s\t\7\h\b\f\c\3\y\i\j\x\8\w\e\q\d\7\i\p\f\m\o\a\p\j\r\m\5\o\6\m\u\0\m\p\g\e\z\a\e\e\1\k\i\s\g\d\b\a\e\a\e\n\e\6\m\e\q\2\7\d\c\1\q\b\4\f\t\s\h\a\s\l\0\1\b\t\6\3\t\d\n\e\7\n\y\t\9\i\3\r\n\1\z\m\6\e\3\g\a\e\i\8\x\e\e\d\v\0\r\a\b\h\i\s\w\e\s\0\f\h\c\5\4\l\n\j\f\c\7\m\l\o\z\5\7\5\i\7\3\4\6\g\v\0\x\a\e\b\d\k\r\a\k\5\b\w\5\z\0\x\w\4\p\6\p\w\3\u\h\4\a\c\e\8\5\2\w\d\u\u\v\d\h\5\l\8\7\1\a\k\a\o\s\h\b\5\r\h\q\x\8\3\h\3\l\0\u\w\3\3\m\4\j\i\f\4\x\y\8\m\h\n\b\x\6\x\c\5\7\2\0\f\i\o\m\j\1\q\o\f\z\0\o\i\3\2\9\x\a\3\3\t\s\m\5\y\2\d\p\d\z\1\2\x\i\2\c\g\l\r\r\0\f\f\6\w\f\8\p\e\g\3\i\p\y\h\g\j\6\t\s\k\9 ]] 00:08:04.885 00:08:04.885 real 0m1.200s 00:08:04.885 user 0m0.582s 00:08:04.885 sys 0m0.290s 00:08:04.885 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:04.885 ************************************ 00:08:04.885 END TEST dd_flag_nofollow_forced_aio 00:08:04.885 ************************************ 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:04.886 
06:00:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:04.886 ************************************ 00:08:04.886 START TEST dd_flag_noatime_forced_aio 00:08:04.886 ************************************ 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1125 -- # noatime 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:04.886 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:05.162 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1727762430 00:08:05.162 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:05.162 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1727762430 00:08:05.162 06:00:30 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:06.106 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.106 [2024-10-01 06:00:31.566005] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:06.106 [2024-10-01 06:00:31.566102] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72571 ] 00:08:06.106 [2024-10-01 06:00:31.706005] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.365 [2024-10-01 06:00:31.749188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.365 [2024-10-01 06:00:31.782906] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.365  Copying: 512/512 [B] (average 500 kBps) 00:08:06.365 00:08:06.365 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.365 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1727762430 )) 00:08:06.365 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.365 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1727762430 )) 00:08:06.365 06:00:31 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.623 [2024-10-01 06:00:32.030989] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:06.623 [2024-10-01 06:00:32.031087] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72577 ] 00:08:06.623 [2024-10-01 06:00:32.167589] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.623 [2024-10-01 06:00:32.200709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.623 [2024-10-01 06:00:32.228701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:06.882  Copying: 512/512 [B] (average 500 kBps) 00:08:06.882 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1727762432 )) 00:08:06.882 00:08:06.882 real 0m1.909s 00:08:06.882 user 0m0.440s 00:08:06.882 sys 0m0.225s 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.882 ************************************ 00:08:06.882 END TEST dd_flag_noatime_forced_aio 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:06.882 ************************************ 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.882 
************************************ 00:08:06.882 START TEST dd_flags_misc_forced_aio 00:08:06.882 ************************************ 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1125 -- # io 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:06.882 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:07.140 [2024-10-01 06:00:32.511763] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:07.140 [2024-10-01 06:00:32.511850] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72609 ] 00:08:07.140 [2024-10-01 06:00:32.647630] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.140 [2024-10-01 06:00:32.680248] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.140 [2024-10-01 06:00:32.708637] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.399  Copying: 512/512 [B] (average 500 kBps) 00:08:07.399 00:08:07.399 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhoty8p9n6wqplcv7q4lazgu4ipmy1qx0w7hztn30akp36rp6m96pd4ayv1nfqwcpy6on6tjhitmz99crlyraql8ncutbaazkti0r1b8gc51082cyuq3rt4oyswlh6jt8mtwk6q1106j8gfrh07gcg8zd1867jhbe9lvxxk0tp1rc7za674gb2y0iofkofta93wlzpz2hh0i9th4howtsv0oly6j14rbiwrf7f60upsr8mf2f6va57h0geshepgcf4gml5or4rg5call8col116irmn3ysknisfhooj5cvf461r5q4dzczi6mcn8m6dkmmqt7snb9m96dtc2nqwtn77qymxhkfzwc0swsrlcsflo9olvclr5a0tedicrrxida7c8slwutru5zyeau4byj1a5u3d7xnq8b4bc5g19vsy341t6a50w1580v9hwhdveb43lw2chif1ujclmyylp5iobj5gptzfsqficfdiokvqzhmeyoik8hrh2e4lccclb == 
\d\h\o\t\y\8\p\9\n\6\w\q\p\l\c\v\7\q\4\l\a\z\g\u\4\i\p\m\y\1\q\x\0\w\7\h\z\t\n\3\0\a\k\p\3\6\r\p\6\m\9\6\p\d\4\a\y\v\1\n\f\q\w\c\p\y\6\o\n\6\t\j\h\i\t\m\z\9\9\c\r\l\y\r\a\q\l\8\n\c\u\t\b\a\a\z\k\t\i\0\r\1\b\8\g\c\5\1\0\8\2\c\y\u\q\3\r\t\4\o\y\s\w\l\h\6\j\t\8\m\t\w\k\6\q\1\1\0\6\j\8\g\f\r\h\0\7\g\c\g\8\z\d\1\8\6\7\j\h\b\e\9\l\v\x\x\k\0\t\p\1\r\c\7\z\a\6\7\4\g\b\2\y\0\i\o\f\k\o\f\t\a\9\3\w\l\z\p\z\2\h\h\0\i\9\t\h\4\h\o\w\t\s\v\0\o\l\y\6\j\1\4\r\b\i\w\r\f\7\f\6\0\u\p\s\r\8\m\f\2\f\6\v\a\5\7\h\0\g\e\s\h\e\p\g\c\f\4\g\m\l\5\o\r\4\r\g\5\c\a\l\l\8\c\o\l\1\1\6\i\r\m\n\3\y\s\k\n\i\s\f\h\o\o\j\5\c\v\f\4\6\1\r\5\q\4\d\z\c\z\i\6\m\c\n\8\m\6\d\k\m\m\q\t\7\s\n\b\9\m\9\6\d\t\c\2\n\q\w\t\n\7\7\q\y\m\x\h\k\f\z\w\c\0\s\w\s\r\l\c\s\f\l\o\9\o\l\v\c\l\r\5\a\0\t\e\d\i\c\r\r\x\i\d\a\7\c\8\s\l\w\u\t\r\u\5\z\y\e\a\u\4\b\y\j\1\a\5\u\3\d\7\x\n\q\8\b\4\b\c\5\g\1\9\v\s\y\3\4\1\t\6\a\5\0\w\1\5\8\0\v\9\h\w\h\d\v\e\b\4\3\l\w\2\c\h\i\f\1\u\j\c\l\m\y\y\l\p\5\i\o\b\j\5\g\p\t\z\f\s\q\f\i\c\f\d\i\o\k\v\q\z\h\m\e\y\o\i\k\8\h\r\h\2\e\4\l\c\c\c\l\b ]] 00:08:07.399 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:07.399 06:00:32 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:07.399 [2024-10-01 06:00:32.941805] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:07.399 [2024-10-01 06:00:32.941916] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72611 ] 00:08:07.658 [2024-10-01 06:00:33.076643] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.658 [2024-10-01 06:00:33.113486] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.658 [2024-10-01 06:00:33.144396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.915  Copying: 512/512 [B] (average 500 kBps) 00:08:07.915 00:08:07.915 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhoty8p9n6wqplcv7q4lazgu4ipmy1qx0w7hztn30akp36rp6m96pd4ayv1nfqwcpy6on6tjhitmz99crlyraql8ncutbaazkti0r1b8gc51082cyuq3rt4oyswlh6jt8mtwk6q1106j8gfrh07gcg8zd1867jhbe9lvxxk0tp1rc7za674gb2y0iofkofta93wlzpz2hh0i9th4howtsv0oly6j14rbiwrf7f60upsr8mf2f6va57h0geshepgcf4gml5or4rg5call8col116irmn3ysknisfhooj5cvf461r5q4dzczi6mcn8m6dkmmqt7snb9m96dtc2nqwtn77qymxhkfzwc0swsrlcsflo9olvclr5a0tedicrrxida7c8slwutru5zyeau4byj1a5u3d7xnq8b4bc5g19vsy341t6a50w1580v9hwhdveb43lw2chif1ujclmyylp5iobj5gptzfsqficfdiokvqzhmeyoik8hrh2e4lccclb == 
\d\h\o\t\y\8\p\9\n\6\w\q\p\l\c\v\7\q\4\l\a\z\g\u\4\i\p\m\y\1\q\x\0\w\7\h\z\t\n\3\0\a\k\p\3\6\r\p\6\m\9\6\p\d\4\a\y\v\1\n\f\q\w\c\p\y\6\o\n\6\t\j\h\i\t\m\z\9\9\c\r\l\y\r\a\q\l\8\n\c\u\t\b\a\a\z\k\t\i\0\r\1\b\8\g\c\5\1\0\8\2\c\y\u\q\3\r\t\4\o\y\s\w\l\h\6\j\t\8\m\t\w\k\6\q\1\1\0\6\j\8\g\f\r\h\0\7\g\c\g\8\z\d\1\8\6\7\j\h\b\e\9\l\v\x\x\k\0\t\p\1\r\c\7\z\a\6\7\4\g\b\2\y\0\i\o\f\k\o\f\t\a\9\3\w\l\z\p\z\2\h\h\0\i\9\t\h\4\h\o\w\t\s\v\0\o\l\y\6\j\1\4\r\b\i\w\r\f\7\f\6\0\u\p\s\r\8\m\f\2\f\6\v\a\5\7\h\0\g\e\s\h\e\p\g\c\f\4\g\m\l\5\o\r\4\r\g\5\c\a\l\l\8\c\o\l\1\1\6\i\r\m\n\3\y\s\k\n\i\s\f\h\o\o\j\5\c\v\f\4\6\1\r\5\q\4\d\z\c\z\i\6\m\c\n\8\m\6\d\k\m\m\q\t\7\s\n\b\9\m\9\6\d\t\c\2\n\q\w\t\n\7\7\q\y\m\x\h\k\f\z\w\c\0\s\w\s\r\l\c\s\f\l\o\9\o\l\v\c\l\r\5\a\0\t\e\d\i\c\r\r\x\i\d\a\7\c\8\s\l\w\u\t\r\u\5\z\y\e\a\u\4\b\y\j\1\a\5\u\3\d\7\x\n\q\8\b\4\b\c\5\g\1\9\v\s\y\3\4\1\t\6\a\5\0\w\1\5\8\0\v\9\h\w\h\d\v\e\b\4\3\l\w\2\c\h\i\f\1\u\j\c\l\m\y\y\l\p\5\i\o\b\j\5\g\p\t\z\f\s\q\f\i\c\f\d\i\o\k\v\q\z\h\m\e\y\o\i\k\8\h\r\h\2\e\4\l\c\c\c\l\b ]] 00:08:07.915 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:07.915 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:07.915 [2024-10-01 06:00:33.360423] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:07.915 [2024-10-01 06:00:33.360529] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72624 ] 00:08:07.915 [2024-10-01 06:00:33.486462] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.915 [2024-10-01 06:00:33.519448] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.174 [2024-10-01 06:00:33.547968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.174  Copying: 512/512 [B] (average 500 kBps) 00:08:08.174 00:08:08.174 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhoty8p9n6wqplcv7q4lazgu4ipmy1qx0w7hztn30akp36rp6m96pd4ayv1nfqwcpy6on6tjhitmz99crlyraql8ncutbaazkti0r1b8gc51082cyuq3rt4oyswlh6jt8mtwk6q1106j8gfrh07gcg8zd1867jhbe9lvxxk0tp1rc7za674gb2y0iofkofta93wlzpz2hh0i9th4howtsv0oly6j14rbiwrf7f60upsr8mf2f6va57h0geshepgcf4gml5or4rg5call8col116irmn3ysknisfhooj5cvf461r5q4dzczi6mcn8m6dkmmqt7snb9m96dtc2nqwtn77qymxhkfzwc0swsrlcsflo9olvclr5a0tedicrrxida7c8slwutru5zyeau4byj1a5u3d7xnq8b4bc5g19vsy341t6a50w1580v9hwhdveb43lw2chif1ujclmyylp5iobj5gptzfsqficfdiokvqzhmeyoik8hrh2e4lccclb == 
\d\h\o\t\y\8\p\9\n\6\w\q\p\l\c\v\7\q\4\l\a\z\g\u\4\i\p\m\y\1\q\x\0\w\7\h\z\t\n\3\0\a\k\p\3\6\r\p\6\m\9\6\p\d\4\a\y\v\1\n\f\q\w\c\p\y\6\o\n\6\t\j\h\i\t\m\z\9\9\c\r\l\y\r\a\q\l\8\n\c\u\t\b\a\a\z\k\t\i\0\r\1\b\8\g\c\5\1\0\8\2\c\y\u\q\3\r\t\4\o\y\s\w\l\h\6\j\t\8\m\t\w\k\6\q\1\1\0\6\j\8\g\f\r\h\0\7\g\c\g\8\z\d\1\8\6\7\j\h\b\e\9\l\v\x\x\k\0\t\p\1\r\c\7\z\a\6\7\4\g\b\2\y\0\i\o\f\k\o\f\t\a\9\3\w\l\z\p\z\2\h\h\0\i\9\t\h\4\h\o\w\t\s\v\0\o\l\y\6\j\1\4\r\b\i\w\r\f\7\f\6\0\u\p\s\r\8\m\f\2\f\6\v\a\5\7\h\0\g\e\s\h\e\p\g\c\f\4\g\m\l\5\o\r\4\r\g\5\c\a\l\l\8\c\o\l\1\1\6\i\r\m\n\3\y\s\k\n\i\s\f\h\o\o\j\5\c\v\f\4\6\1\r\5\q\4\d\z\c\z\i\6\m\c\n\8\m\6\d\k\m\m\q\t\7\s\n\b\9\m\9\6\d\t\c\2\n\q\w\t\n\7\7\q\y\m\x\h\k\f\z\w\c\0\s\w\s\r\l\c\s\f\l\o\9\o\l\v\c\l\r\5\a\0\t\e\d\i\c\r\r\x\i\d\a\7\c\8\s\l\w\u\t\r\u\5\z\y\e\a\u\4\b\y\j\1\a\5\u\3\d\7\x\n\q\8\b\4\b\c\5\g\1\9\v\s\y\3\4\1\t\6\a\5\0\w\1\5\8\0\v\9\h\w\h\d\v\e\b\4\3\l\w\2\c\h\i\f\1\u\j\c\l\m\y\y\l\p\5\i\o\b\j\5\g\p\t\z\f\s\q\f\i\c\f\d\i\o\k\v\q\z\h\m\e\y\o\i\k\8\h\r\h\2\e\4\l\c\c\c\l\b ]] 00:08:08.174 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:08.174 06:00:33 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:08.174 [2024-10-01 06:00:33.759335] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:08.174 [2024-10-01 06:00:33.759442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72626 ] 00:08:08.432 [2024-10-01 06:00:33.894633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.432 [2024-10-01 06:00:33.927240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.432 [2024-10-01 06:00:33.956898] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.691  Copying: 512/512 [B] (average 500 kBps) 00:08:08.691 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ dhoty8p9n6wqplcv7q4lazgu4ipmy1qx0w7hztn30akp36rp6m96pd4ayv1nfqwcpy6on6tjhitmz99crlyraql8ncutbaazkti0r1b8gc51082cyuq3rt4oyswlh6jt8mtwk6q1106j8gfrh07gcg8zd1867jhbe9lvxxk0tp1rc7za674gb2y0iofkofta93wlzpz2hh0i9th4howtsv0oly6j14rbiwrf7f60upsr8mf2f6va57h0geshepgcf4gml5or4rg5call8col116irmn3ysknisfhooj5cvf461r5q4dzczi6mcn8m6dkmmqt7snb9m96dtc2nqwtn77qymxhkfzwc0swsrlcsflo9olvclr5a0tedicrrxida7c8slwutru5zyeau4byj1a5u3d7xnq8b4bc5g19vsy341t6a50w1580v9hwhdveb43lw2chif1ujclmyylp5iobj5gptzfsqficfdiokvqzhmeyoik8hrh2e4lccclb == 
\d\h\o\t\y\8\p\9\n\6\w\q\p\l\c\v\7\q\4\l\a\z\g\u\4\i\p\m\y\1\q\x\0\w\7\h\z\t\n\3\0\a\k\p\3\6\r\p\6\m\9\6\p\d\4\a\y\v\1\n\f\q\w\c\p\y\6\o\n\6\t\j\h\i\t\m\z\9\9\c\r\l\y\r\a\q\l\8\n\c\u\t\b\a\a\z\k\t\i\0\r\1\b\8\g\c\5\1\0\8\2\c\y\u\q\3\r\t\4\o\y\s\w\l\h\6\j\t\8\m\t\w\k\6\q\1\1\0\6\j\8\g\f\r\h\0\7\g\c\g\8\z\d\1\8\6\7\j\h\b\e\9\l\v\x\x\k\0\t\p\1\r\c\7\z\a\6\7\4\g\b\2\y\0\i\o\f\k\o\f\t\a\9\3\w\l\z\p\z\2\h\h\0\i\9\t\h\4\h\o\w\t\s\v\0\o\l\y\6\j\1\4\r\b\i\w\r\f\7\f\6\0\u\p\s\r\8\m\f\2\f\6\v\a\5\7\h\0\g\e\s\h\e\p\g\c\f\4\g\m\l\5\o\r\4\r\g\5\c\a\l\l\8\c\o\l\1\1\6\i\r\m\n\3\y\s\k\n\i\s\f\h\o\o\j\5\c\v\f\4\6\1\r\5\q\4\d\z\c\z\i\6\m\c\n\8\m\6\d\k\m\m\q\t\7\s\n\b\9\m\9\6\d\t\c\2\n\q\w\t\n\7\7\q\y\m\x\h\k\f\z\w\c\0\s\w\s\r\l\c\s\f\l\o\9\o\l\v\c\l\r\5\a\0\t\e\d\i\c\r\r\x\i\d\a\7\c\8\s\l\w\u\t\r\u\5\z\y\e\a\u\4\b\y\j\1\a\5\u\3\d\7\x\n\q\8\b\4\b\c\5\g\1\9\v\s\y\3\4\1\t\6\a\5\0\w\1\5\8\0\v\9\h\w\h\d\v\e\b\4\3\l\w\2\c\h\i\f\1\u\j\c\l\m\y\y\l\p\5\i\o\b\j\5\g\p\t\z\f\s\q\f\i\c\f\d\i\o\k\v\q\z\h\m\e\y\o\i\k\8\h\r\h\2\e\4\l\c\c\c\l\b ]] 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:08.691 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:08.691 [2024-10-01 06:00:34.180308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:08.691 [2024-10-01 06:00:34.180390] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72634 ] 00:08:08.954 [2024-10-01 06:00:34.311482] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.954 [2024-10-01 06:00:34.345047] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.954 [2024-10-01 06:00:34.378902] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.954  Copying: 512/512 [B] (average 500 kBps) 00:08:08.954 00:08:08.954 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ shs05xeyeww3m0jflzeljxxm3hovg1p096r7zg0e6jsbl7h815o7m8wtp7s3sd7jf6yh4fn3g2crhe8xhhqm46zder3zi6bpbqj69699qd31k2ncd7q47jp80za8xlb3zwj53ayupbu81i5v4w5v2vyq2wtc0yt7t5fk7k2v5jf0ycsx905jtib1a3889e8hey7gr4ibd575a6nh43f8r51tgyzf2eq8gzxxff0lquh3cgov93pqsnrwlpdq1zcz4drni677zl53u5h8yck11s2l5g4mhia0gso9t0c5qbdx77wq18cizhpsg0p49sokiek08657hjz93oef6wqt2er1ope5m6fu9c4r3nclogfjcwmv38uwkjv45rivuyv8ezekhf2tk8398xutldpc7sl6ou975wg4khdssi2vwr0qr0aqxeh2tn9oy5j11rht8vax1vw5lmywdab0bcwql60zep3um5575kt3hqmh743s9dw4mtsrpl7oig6xel0e == \s\h\s\0\5\x\e\y\e\w\w\3\m\0\j\f\l\z\e\l\j\x\x\m\3\h\o\v\g\1\p\0\9\6\r\7\z\g\0\e\6\j\s\b\l\7\h\8\1\5\o\7\m\8\w\t\p\7\s\3\s\d\7\j\f\6\y\h\4\f\n\3\g\2\c\r\h\e\8\x\h\h\q\m\4\6\z\d\e\r\3\z\i\6\b\p\b\q\j\6\9\6\9\9\q\d\3\1\k\2\n\c\d\7\q\4\7\j\p\8\0\z\a\8\x\l\b\3\z\w\j\5\3\a\y\u\p\b\u\8\1\i\5\v\4\w\5\v\2\v\y\q\2\w\t\c\0\y\t\7\t\5\f\k\7\k\2\v\5\j\f\0\y\c\s\x\9\0\5\j\t\i\b\1\a\3\8\8\9\e\8\h\e\y\7\g\r\4\i\b\d\5\7\5\a\6\n\h\4\3\f\8\r\5\1\t\g\y\z\f\2\e\q\8\g\z\x\x\f\f\0\l\q\u\h\3\c\g\o\v\9\3\p\q\s\n\r\w\l\p\d\q\1\z\c\z\4\d\r\n\i\6\7\7\z\l\5\3\u\5\h\8\y\c\k\1\1\s\2\l\5\g\4\m\h\i\a\0\g\s\o\9\t\0\c\5\q\b\d\x\7\7\w\q\1\8\c\i\z\h\p\s\g\0\p\4\9\s\o\k\i\e\k\0\8\6\5\7\h\j\z\9\3\o\e\f\6\w\q\t\2\e\r\1\o\p\e\5\m\6\f\u\9\c\4\r\3\n\c\l\o\g\f\j\c\w\m\v\3\8\u\w\k\j\v\4\5\r\i\v\u\y\v\8\e\z\e\k\h\f\2\t\k\8\3\9\8\x\u\t\l\d\p\c\7\s\l\6\o\u\9\7\5\w\g\4\k\h\d\s\s\i\2\v\w\r\0\q\r\0\a\q\x\e\h\2\t\n\9\o\y\5\j\1\1\r\h\t\8\v\a\x\1\v\w\5\l\m\y\w\d\a\b\0\b\c\w\q\l\6\0\z\e\p\3\u\m\5\5\7\5\k\t\3\h\q\m\h\7\4\3\s\9\d\w\4\m\t\s\r\p\l\7\o\i\g\6\x\e\l\0\e ]] 00:08:08.954 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:08.954 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:09.214 [2024-10-01 06:00:34.597308] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:09.214 [2024-10-01 06:00:34.597413] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72641 ] 00:08:09.214 [2024-10-01 06:00:34.729239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.214 [2024-10-01 06:00:34.763234] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.214 [2024-10-01 06:00:34.794344] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.473  Copying: 512/512 [B] (average 500 kBps) 00:08:09.473 00:08:09.473 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ shs05xeyeww3m0jflzeljxxm3hovg1p096r7zg0e6jsbl7h815o7m8wtp7s3sd7jf6yh4fn3g2crhe8xhhqm46zder3zi6bpbqj69699qd31k2ncd7q47jp80za8xlb3zwj53ayupbu81i5v4w5v2vyq2wtc0yt7t5fk7k2v5jf0ycsx905jtib1a3889e8hey7gr4ibd575a6nh43f8r51tgyzf2eq8gzxxff0lquh3cgov93pqsnrwlpdq1zcz4drni677zl53u5h8yck11s2l5g4mhia0gso9t0c5qbdx77wq18cizhpsg0p49sokiek08657hjz93oef6wqt2er1ope5m6fu9c4r3nclogfjcwmv38uwkjv45rivuyv8ezekhf2tk8398xutldpc7sl6ou975wg4khdssi2vwr0qr0aqxeh2tn9oy5j11rht8vax1vw5lmywdab0bcwql60zep3um5575kt3hqmh743s9dw4mtsrpl7oig6xel0e == \s\h\s\0\5\x\e\y\e\w\w\3\m\0\j\f\l\z\e\l\j\x\x\m\3\h\o\v\g\1\p\0\9\6\r\7\z\g\0\e\6\j\s\b\l\7\h\8\1\5\o\7\m\8\w\t\p\7\s\3\s\d\7\j\f\6\y\h\4\f\n\3\g\2\c\r\h\e\8\x\h\h\q\m\4\6\z\d\e\r\3\z\i\6\b\p\b\q\j\6\9\6\9\9\q\d\3\1\k\2\n\c\d\7\q\4\7\j\p\8\0\z\a\8\x\l\b\3\z\w\j\5\3\a\y\u\p\b\u\8\1\i\5\v\4\w\5\v\2\v\y\q\2\w\t\c\0\y\t\7\t\5\f\k\7\k\2\v\5\j\f\0\y\c\s\x\9\0\5\j\t\i\b\1\a\3\8\8\9\e\8\h\e\y\7\g\r\4\i\b\d\5\7\5\a\6\n\h\4\3\f\8\r\5\1\t\g\y\z\f\2\e\q\8\g\z\x\x\f\f\0\l\q\u\h\3\c\g\o\v\9\3\p\q\s\n\r\w\l\p\d\q\1\z\c\z\4\d\r\n\i\6\7\7\z\l\5\3\u\5\h\8\y\c\k\1\1\s\2\l\5\g\4\m\h\i\a\0\g\s\o\9\t\0\c\5\q\b\d\x\7\7\w\q\1\8\c\i\z\h\p\s\g\0\p\4\9\s\o\k\i\e\k\0\8\6\5\7\h\j\z\9\3\o\e\f\6\w\q\t\2\e\r\1\o\p\e\5\m\6\f\u\9\c\4\r\3\n\c\l\o\g\f\j\c\w\m\v\3\8\u\w\k\j\v\4\5\r\i\v\u\y\v\8\e\z\e\k\h\f\2\t\k\8\3\9\8\x\u\t\l\d\p\c\7\s\l\6\o\u\9\7\5\w\g\4\k\h\d\s\s\i\2\v\w\r\0\q\r\0\a\q\x\e\h\2\t\n\9\o\y\5\j\1\1\r\h\t\8\v\a\x\1\v\w\5\l\m\y\w\d\a\b\0\b\c\w\q\l\6\0\z\e\p\3\u\m\5\5\7\5\k\t\3\h\q\m\h\7\4\3\s\9\d\w\4\m\t\s\r\p\l\7\o\i\g\6\x\e\l\0\e ]] 00:08:09.473 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.473 06:00:34 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:09.473 [2024-10-01 06:00:35.018277] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:09.473 [2024-10-01 06:00:35.018380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72643 ] 00:08:09.731 [2024-10-01 06:00:35.149814] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.731 [2024-10-01 06:00:35.182527] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.731 [2024-10-01 06:00:35.212426] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:09.990  Copying: 512/512 [B] (average 166 kBps) 00:08:09.990 00:08:09.990 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ shs05xeyeww3m0jflzeljxxm3hovg1p096r7zg0e6jsbl7h815o7m8wtp7s3sd7jf6yh4fn3g2crhe8xhhqm46zder3zi6bpbqj69699qd31k2ncd7q47jp80za8xlb3zwj53ayupbu81i5v4w5v2vyq2wtc0yt7t5fk7k2v5jf0ycsx905jtib1a3889e8hey7gr4ibd575a6nh43f8r51tgyzf2eq8gzxxff0lquh3cgov93pqsnrwlpdq1zcz4drni677zl53u5h8yck11s2l5g4mhia0gso9t0c5qbdx77wq18cizhpsg0p49sokiek08657hjz93oef6wqt2er1ope5m6fu9c4r3nclogfjcwmv38uwkjv45rivuyv8ezekhf2tk8398xutldpc7sl6ou975wg4khdssi2vwr0qr0aqxeh2tn9oy5j11rht8vax1vw5lmywdab0bcwql60zep3um5575kt3hqmh743s9dw4mtsrpl7oig6xel0e == \s\h\s\0\5\x\e\y\e\w\w\3\m\0\j\f\l\z\e\l\j\x\x\m\3\h\o\v\g\1\p\0\9\6\r\7\z\g\0\e\6\j\s\b\l\7\h\8\1\5\o\7\m\8\w\t\p\7\s\3\s\d\7\j\f\6\y\h\4\f\n\3\g\2\c\r\h\e\8\x\h\h\q\m\4\6\z\d\e\r\3\z\i\6\b\p\b\q\j\6\9\6\9\9\q\d\3\1\k\2\n\c\d\7\q\4\7\j\p\8\0\z\a\8\x\l\b\3\z\w\j\5\3\a\y\u\p\b\u\8\1\i\5\v\4\w\5\v\2\v\y\q\2\w\t\c\0\y\t\7\t\5\f\k\7\k\2\v\5\j\f\0\y\c\s\x\9\0\5\j\t\i\b\1\a\3\8\8\9\e\8\h\e\y\7\g\r\4\i\b\d\5\7\5\a\6\n\h\4\3\f\8\r\5\1\t\g\y\z\f\2\e\q\8\g\z\x\x\f\f\0\l\q\u\h\3\c\g\o\v\9\3\p\q\s\n\r\w\l\p\d\q\1\z\c\z\4\d\r\n\i\6\7\7\z\l\5\3\u\5\h\8\y\c\k\1\1\s\2\l\5\g\4\m\h\i\a\0\g\s\o\9\t\0\c\5\q\b\d\x\7\7\w\q\1\8\c\i\z\h\p\s\g\0\p\4\9\s\o\k\i\e\k\0\8\6\5\7\h\j\z\9\3\o\e\f\6\w\q\t\2\e\r\1\o\p\e\5\m\6\f\u\9\c\4\r\3\n\c\l\o\g\f\j\c\w\m\v\3\8\u\w\k\j\v\4\5\r\i\v\u\y\v\8\e\z\e\k\h\f\2\t\k\8\3\9\8\x\u\t\l\d\p\c\7\s\l\6\o\u\9\7\5\w\g\4\k\h\d\s\s\i\2\v\w\r\0\q\r\0\a\q\x\e\h\2\t\n\9\o\y\5\j\1\1\r\h\t\8\v\a\x\1\v\w\5\l\m\y\w\d\a\b\0\b\c\w\q\l\6\0\z\e\p\3\u\m\5\5\7\5\k\t\3\h\q\m\h\7\4\3\s\9\d\w\4\m\t\s\r\p\l\7\o\i\g\6\x\e\l\0\e ]] 00:08:09.990 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:09.990 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:09.990 [2024-10-01 06:00:35.464001] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:09.990 [2024-10-01 06:00:35.464135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72656 ] 00:08:09.990 [2024-10-01 06:00:35.600366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.250 [2024-10-01 06:00:35.636706] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.250 [2024-10-01 06:00:35.664425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.250  Copying: 512/512 [B] (average 500 kBps) 00:08:10.250 00:08:10.250 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ shs05xeyeww3m0jflzeljxxm3hovg1p096r7zg0e6jsbl7h815o7m8wtp7s3sd7jf6yh4fn3g2crhe8xhhqm46zder3zi6bpbqj69699qd31k2ncd7q47jp80za8xlb3zwj53ayupbu81i5v4w5v2vyq2wtc0yt7t5fk7k2v5jf0ycsx905jtib1a3889e8hey7gr4ibd575a6nh43f8r51tgyzf2eq8gzxxff0lquh3cgov93pqsnrwlpdq1zcz4drni677zl53u5h8yck11s2l5g4mhia0gso9t0c5qbdx77wq18cizhpsg0p49sokiek08657hjz93oef6wqt2er1ope5m6fu9c4r3nclogfjcwmv38uwkjv45rivuyv8ezekhf2tk8398xutldpc7sl6ou975wg4khdssi2vwr0qr0aqxeh2tn9oy5j11rht8vax1vw5lmywdab0bcwql60zep3um5575kt3hqmh743s9dw4mtsrpl7oig6xel0e == \s\h\s\0\5\x\e\y\e\w\w\3\m\0\j\f\l\z\e\l\j\x\x\m\3\h\o\v\g\1\p\0\9\6\r\7\z\g\0\e\6\j\s\b\l\7\h\8\1\5\o\7\m\8\w\t\p\7\s\3\s\d\7\j\f\6\y\h\4\f\n\3\g\2\c\r\h\e\8\x\h\h\q\m\4\6\z\d\e\r\3\z\i\6\b\p\b\q\j\6\9\6\9\9\q\d\3\1\k\2\n\c\d\7\q\4\7\j\p\8\0\z\a\8\x\l\b\3\z\w\j\5\3\a\y\u\p\b\u\8\1\i\5\v\4\w\5\v\2\v\y\q\2\w\t\c\0\y\t\7\t\5\f\k\7\k\2\v\5\j\f\0\y\c\s\x\9\0\5\j\t\i\b\1\a\3\8\8\9\e\8\h\e\y\7\g\r\4\i\b\d\5\7\5\a\6\n\h\4\3\f\8\r\5\1\t\g\y\z\f\2\e\q\8\g\z\x\x\f\f\0\l\q\u\h\3\c\g\o\v\9\3\p\q\s\n\r\w\l\p\d\q\1\z\c\z\4\d\r\n\i\6\7\7\z\l\5\3\u\5\h\8\y\c\k\1\1\s\2\l\5\g\4\m\h\i\a\0\g\s\o\9\t\0\c\5\q\b\d\x\7\7\w\q\1\8\c\i\z\h\p\s\g\0\p\4\9\s\o\k\i\e\k\0\8\6\5\7\h\j\z\9\3\o\e\f\6\w\q\t\2\e\r\1\o\p\e\5\m\6\f\u\9\c\4\r\3\n\c\l\o\g\f\j\c\w\m\v\3\8\u\w\k\j\v\4\5\r\i\v\u\y\v\8\e\z\e\k\h\f\2\t\k\8\3\9\8\x\u\t\l\d\p\c\7\s\l\6\o\u\9\7\5\w\g\4\k\h\d\s\s\i\2\v\w\r\0\q\r\0\a\q\x\e\h\2\t\n\9\o\y\5\j\1\1\r\h\t\8\v\a\x\1\v\w\5\l\m\y\w\d\a\b\0\b\c\w\q\l\6\0\z\e\p\3\u\m\5\5\7\5\k\t\3\h\q\m\h\7\4\3\s\9\d\w\4\m\t\s\r\p\l\7\o\i\g\6\x\e\l\0\e ]] 00:08:10.250 00:08:10.250 real 0m3.376s 00:08:10.250 user 0m1.631s 00:08:10.250 sys 0m0.774s 00:08:10.250 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.250 06:00:35 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:10.250 ************************************ 00:08:10.250 END TEST dd_flags_misc_forced_aio 00:08:10.250 ************************************ 00:08:10.510 06:00:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:10.510 06:00:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:10.510 06:00:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:10.510 00:08:10.510 real 0m15.836s 00:08:10.510 user 0m6.661s 00:08:10.510 sys 0m4.476s 00:08:10.510 06:00:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:10.510 06:00:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:10.510 
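Editor's note on the run above: dd_flags_misc_forced_aio walks a small matrix of open flags — each read-side flag (direct, nonblock) is paired with every write-side flag (direct, nonblock, sync, dsync), a fresh 512-byte payload is generated per read-side flag, and the copied bytes are checked after each spdk_dd invocation. A minimal stand-alone sketch of that loop follows, assuming only the spdk_dd binary and dump-file paths shown in the log; the payload generation and the cmp check stand in for the suite's gen_bytes and string-comparison helpers and are not part of the captured output.

#!/usr/bin/env bash
# Editorial sketch, not captured log output: the flag matrix exercised by
# dd_flags_misc_forced_aio above. Binary and dump-file paths are taken from
# the log; payload generation and the cmp check are stand-ins.
set -e

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
SRC=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DST=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1

flags_ro=(direct nonblock)               # read-side (--iflag) values
flags_rw=("${flags_ro[@]}" sync dsync)   # write-side (--oflag) adds sync/dsync

for flag_ro in "${flags_ro[@]}"; do
  # fresh 512-byte lowercase/digit payload per read-side flag
  tr -dc 'a-z0-9' < /dev/urandom | head -c 512 > "$SRC"
  for flag_rw in "${flags_rw[@]}"; do
    "$SPDK_DD" --aio --if="$SRC" --iflag="$flag_ro" \
               --of="$DST" --oflag="$flag_rw"
    cmp -n 512 "$SRC" "$DST"   # verify the copied bytes round-tripped
  done
done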
************************************ 00:08:10.510 END TEST spdk_dd_posix 00:08:10.510 ************************************ 00:08:10.510 06:00:35 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:10.510 06:00:35 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.510 06:00:35 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.510 06:00:35 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:10.510 ************************************ 00:08:10.510 START TEST spdk_dd_malloc 00:08:10.510 ************************************ 00:08:10.510 06:00:35 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:10.510 * Looking for test storage... 00:08:10.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lcov --version 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:10.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.510 --rc genhtml_branch_coverage=1 00:08:10.510 --rc genhtml_function_coverage=1 00:08:10.510 --rc genhtml_legend=1 00:08:10.510 --rc geninfo_all_blocks=1 00:08:10.510 --rc geninfo_unexecuted_blocks=1 00:08:10.510 00:08:10.510 ' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:10.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.510 --rc genhtml_branch_coverage=1 00:08:10.510 --rc genhtml_function_coverage=1 00:08:10.510 --rc genhtml_legend=1 00:08:10.510 --rc geninfo_all_blocks=1 00:08:10.510 --rc geninfo_unexecuted_blocks=1 00:08:10.510 00:08:10.510 ' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:10.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.510 --rc genhtml_branch_coverage=1 00:08:10.510 --rc genhtml_function_coverage=1 00:08:10.510 --rc genhtml_legend=1 00:08:10.510 --rc geninfo_all_blocks=1 00:08:10.510 --rc geninfo_unexecuted_blocks=1 00:08:10.510 00:08:10.510 ' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:10.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:10.510 --rc genhtml_branch_coverage=1 00:08:10.510 --rc genhtml_function_coverage=1 00:08:10.510 --rc genhtml_legend=1 00:08:10.510 --rc geninfo_all_blocks=1 00:08:10.510 --rc geninfo_unexecuted_blocks=1 00:08:10.510 00:08:10.510 ' 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:10.510 06:00:36 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:10.511 06:00:36 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:10.511 06:00:36 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:10.770 ************************************ 00:08:10.770 START TEST dd_malloc_copy 00:08:10.770 ************************************ 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1125 -- # malloc_copy 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:10.770 06:00:36 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:10.770 [2024-10-01 06:00:36.180623] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:10.770 [2024-10-01 06:00:36.181211] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72727 ] 00:08:10.770 { 00:08:10.770 "subsystems": [ 00:08:10.770 { 00:08:10.770 "subsystem": "bdev", 00:08:10.770 "config": [ 00:08:10.770 { 00:08:10.770 "params": { 00:08:10.770 "block_size": 512, 00:08:10.770 "num_blocks": 1048576, 00:08:10.770 "name": "malloc0" 00:08:10.770 }, 00:08:10.770 "method": "bdev_malloc_create" 00:08:10.770 }, 00:08:10.770 { 00:08:10.770 "params": { 00:08:10.770 "block_size": 512, 00:08:10.770 "num_blocks": 1048576, 00:08:10.770 "name": "malloc1" 00:08:10.770 }, 00:08:10.770 "method": "bdev_malloc_create" 00:08:10.770 }, 00:08:10.770 { 00:08:10.770 "method": "bdev_wait_for_examine" 00:08:10.770 } 00:08:10.770 ] 00:08:10.770 } 00:08:10.770 ] 00:08:10.770 } 00:08:10.770 [2024-10-01 06:00:36.316060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.770 [2024-10-01 06:00:36.348890] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.770 [2024-10-01 06:00:36.377073] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.601  Copying: 239/512 [MB] (239 MBps) Copying: 477/512 [MB] (237 MBps) Copying: 512/512 [MB] (average 236 MBps) 00:08:13.601 00:08:13.601 06:00:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:13.602 06:00:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:13.602 06:00:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:13.602 06:00:39 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:13.602 [2024-10-01 06:00:39.122084] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
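Editor's note on the run above: the dd_malloc_copy timings come from spdk_dd copying between two RAM-backed malloc bdevs, each 1048576 blocks of 512 bytes (512 MiB), described by the JSON config the test prints and passes on fd 62. Below is a hedged sketch of the same copy as a stand-alone command with the config written to a file instead; the /tmp path is illustrative, while the bdev parameters and the --ib/--ob/--json options are taken from the log.

# Editorial sketch, not captured log output.
cat > /tmp/malloc_copy.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc0" },
          "method": "bdev_malloc_create" },
        { "params": { "block_size": 512, "num_blocks": 1048576, "name": "malloc1" },
          "method": "bdev_malloc_create" },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF

# forward copy (the log also times the reverse direction, malloc1 -> malloc0)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
  --ib=malloc0 --ob=malloc1 --json /tmp/malloc_copy.json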
00:08:13.602 [2024-10-01 06:00:39.122194] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72769 ] 00:08:13.602 { 00:08:13.602 "subsystems": [ 00:08:13.602 { 00:08:13.602 "subsystem": "bdev", 00:08:13.602 "config": [ 00:08:13.602 { 00:08:13.602 "params": { 00:08:13.602 "block_size": 512, 00:08:13.602 "num_blocks": 1048576, 00:08:13.602 "name": "malloc0" 00:08:13.602 }, 00:08:13.602 "method": "bdev_malloc_create" 00:08:13.602 }, 00:08:13.602 { 00:08:13.602 "params": { 00:08:13.602 "block_size": 512, 00:08:13.602 "num_blocks": 1048576, 00:08:13.602 "name": "malloc1" 00:08:13.602 }, 00:08:13.602 "method": "bdev_malloc_create" 00:08:13.602 }, 00:08:13.602 { 00:08:13.602 "method": "bdev_wait_for_examine" 00:08:13.602 } 00:08:13.602 ] 00:08:13.602 } 00:08:13.602 ] 00:08:13.602 } 00:08:13.861 [2024-10-01 06:00:39.259422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.861 [2024-10-01 06:00:39.292667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.861 [2024-10-01 06:00:39.320323] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:16.432  Copying: 236/512 [MB] (236 MBps) Copying: 476/512 [MB] (240 MBps) Copying: 512/512 [MB] (average 238 MBps) 00:08:16.432 00:08:16.432 00:08:16.432 real 0m5.865s 00:08:16.432 user 0m5.207s 00:08:16.432 sys 0m0.507s 00:08:16.432 06:00:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.432 06:00:41 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:16.432 ************************************ 00:08:16.432 END TEST dd_malloc_copy 00:08:16.432 ************************************ 00:08:16.432 00:08:16.432 real 0m6.104s 00:08:16.432 user 0m5.340s 00:08:16.432 sys 0m0.618s 00:08:16.432 06:00:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:16.432 ************************************ 00:08:16.432 06:00:42 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:16.432 END TEST spdk_dd_malloc 00:08:16.432 ************************************ 00:08:16.691 06:00:42 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:16.691 06:00:42 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:16.691 06:00:42 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.691 06:00:42 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:16.691 ************************************ 00:08:16.691 START TEST spdk_dd_bdev_to_bdev 00:08:16.691 ************************************ 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:08:16.691 * Looking for test storage... 
00:08:16.691 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lcov --version 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.691 --rc genhtml_branch_coverage=1 00:08:16.691 --rc genhtml_function_coverage=1 00:08:16.691 --rc genhtml_legend=1 00:08:16.691 --rc geninfo_all_blocks=1 00:08:16.691 --rc geninfo_unexecuted_blocks=1 00:08:16.691 00:08:16.691 ' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.691 --rc genhtml_branch_coverage=1 00:08:16.691 --rc genhtml_function_coverage=1 00:08:16.691 --rc genhtml_legend=1 00:08:16.691 --rc geninfo_all_blocks=1 00:08:16.691 --rc geninfo_unexecuted_blocks=1 00:08:16.691 00:08:16.691 ' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.691 --rc genhtml_branch_coverage=1 00:08:16.691 --rc genhtml_function_coverage=1 00:08:16.691 --rc genhtml_legend=1 00:08:16.691 --rc geninfo_all_blocks=1 00:08:16.691 --rc geninfo_unexecuted_blocks=1 00:08:16.691 00:08:16.691 ' 00:08:16.691 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:16.691 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:16.691 --rc genhtml_branch_coverage=1 00:08:16.691 --rc genhtml_function_coverage=1 00:08:16.691 --rc genhtml_legend=1 00:08:16.691 --rc geninfo_all_blocks=1 00:08:16.691 --rc geninfo_unexecuted_blocks=1 00:08:16.691 00:08:16.691 ' 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.692 06:00:42 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:16.692 ************************************ 00:08:16.692 START TEST dd_inflate_file 00:08:16.692 ************************************ 00:08:16.692 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:16.951 [2024-10-01 06:00:42.332009] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:16.951 [2024-10-01 06:00:42.332119] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72881 ] 00:08:16.951 [2024-10-01 06:00:42.467831] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.951 [2024-10-01 06:00:42.500300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.951 [2024-10-01 06:00:42.527549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.210  Copying: 64/64 [MB] (average 1523 MBps) 00:08:17.210 00:08:17.210 00:08:17.210 real 0m0.444s 00:08:17.210 user 0m0.244s 00:08:17.210 sys 0m0.218s 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:17.210 ************************************ 00:08:17.210 END TEST dd_inflate_file 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 ************************************ 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:17.210 ************************************ 00:08:17.210 START TEST dd_copy_to_out_bdev 00:08:17.210 ************************************ 00:08:17.210 06:00:42 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:17.468 { 00:08:17.468 "subsystems": [ 00:08:17.468 { 00:08:17.468 "subsystem": "bdev", 00:08:17.468 "config": [ 00:08:17.468 { 00:08:17.468 "params": { 00:08:17.468 "trtype": "pcie", 00:08:17.468 "traddr": "0000:00:10.0", 00:08:17.468 "name": "Nvme0" 00:08:17.468 }, 00:08:17.468 "method": "bdev_nvme_attach_controller" 00:08:17.468 }, 00:08:17.468 { 00:08:17.468 "params": { 00:08:17.468 "trtype": "pcie", 00:08:17.468 "traddr": "0000:00:11.0", 00:08:17.468 "name": "Nvme1" 00:08:17.468 }, 00:08:17.468 "method": "bdev_nvme_attach_controller" 00:08:17.468 }, 00:08:17.468 { 00:08:17.468 "method": "bdev_wait_for_examine" 00:08:17.468 } 00:08:17.468 ] 00:08:17.468 } 00:08:17.468 ] 00:08:17.468 } 00:08:17.468 [2024-10-01 06:00:42.833812] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:17.468 [2024-10-01 06:00:42.833933] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72917 ] 00:08:17.468 [2024-10-01 06:00:42.965102] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.468 [2024-10-01 06:00:43.000460] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.468 [2024-10-01 06:00:43.028724] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.117  Copying: 52/64 [MB] (52 MBps) Copying: 64/64 [MB] (average 52 MBps) 00:08:19.117 00:08:19.117 00:08:19.117 real 0m1.803s 00:08:19.117 user 0m1.626s 00:08:19.117 sys 0m1.455s 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:19.117 ************************************ 00:08:19.117 END TEST dd_copy_to_out_bdev 00:08:19.117 ************************************ 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:19.117 ************************************ 00:08:19.117 START TEST dd_offset_magic 00:08:19.117 ************************************ 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1125 -- # offset_magic 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:19.117 06:00:44 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:19.117 [2024-10-01 06:00:44.688655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:19.117 [2024-10-01 06:00:44.688742] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72956 ] 00:08:19.117 { 00:08:19.117 "subsystems": [ 00:08:19.117 { 00:08:19.117 "subsystem": "bdev", 00:08:19.117 "config": [ 00:08:19.117 { 00:08:19.117 "params": { 00:08:19.117 "trtype": "pcie", 00:08:19.117 "traddr": "0000:00:10.0", 00:08:19.117 "name": "Nvme0" 00:08:19.117 }, 00:08:19.117 "method": "bdev_nvme_attach_controller" 00:08:19.117 }, 00:08:19.117 { 00:08:19.117 "params": { 00:08:19.117 "trtype": "pcie", 00:08:19.117 "traddr": "0000:00:11.0", 00:08:19.117 "name": "Nvme1" 00:08:19.117 }, 00:08:19.117 "method": "bdev_nvme_attach_controller" 00:08:19.117 }, 00:08:19.117 { 00:08:19.117 "method": "bdev_wait_for_examine" 00:08:19.117 } 00:08:19.117 ] 00:08:19.117 } 00:08:19.117 ] 00:08:19.117 } 00:08:19.375 [2024-10-01 06:00:44.823755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.375 [2024-10-01 06:00:44.856354] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.375 [2024-10-01 06:00:44.885662] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:19.893  Copying: 65/65 [MB] (average 942 MBps) 00:08:19.893 00:08:19.893 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:19.893 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:19.893 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:19.893 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:19.893 [2024-10-01 06:00:45.331657] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:19.893 [2024-10-01 06:00:45.332317] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72970 ] 00:08:19.893 { 00:08:19.893 "subsystems": [ 00:08:19.893 { 00:08:19.893 "subsystem": "bdev", 00:08:19.893 "config": [ 00:08:19.893 { 00:08:19.893 "params": { 00:08:19.893 "trtype": "pcie", 00:08:19.893 "traddr": "0000:00:10.0", 00:08:19.893 "name": "Nvme0" 00:08:19.893 }, 00:08:19.893 "method": "bdev_nvme_attach_controller" 00:08:19.893 }, 00:08:19.893 { 00:08:19.893 "params": { 00:08:19.893 "trtype": "pcie", 00:08:19.893 "traddr": "0000:00:11.0", 00:08:19.893 "name": "Nvme1" 00:08:19.893 }, 00:08:19.893 "method": "bdev_nvme_attach_controller" 00:08:19.893 }, 00:08:19.893 { 00:08:19.893 "method": "bdev_wait_for_examine" 00:08:19.893 } 00:08:19.893 ] 00:08:19.893 } 00:08:19.893 ] 00:08:19.893 } 00:08:19.893 [2024-10-01 06:00:45.467240] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.893 [2024-10-01 06:00:45.499986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.152 [2024-10-01 06:00:45.529233] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.411  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:20.411 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:20.411 06:00:45 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:20.411 [2024-10-01 06:00:45.871780] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:20.411 [2024-10-01 06:00:45.871873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72992 ] 00:08:20.411 { 00:08:20.411 "subsystems": [ 00:08:20.411 { 00:08:20.411 "subsystem": "bdev", 00:08:20.411 "config": [ 00:08:20.411 { 00:08:20.411 "params": { 00:08:20.411 "trtype": "pcie", 00:08:20.411 "traddr": "0000:00:10.0", 00:08:20.411 "name": "Nvme0" 00:08:20.411 }, 00:08:20.411 "method": "bdev_nvme_attach_controller" 00:08:20.411 }, 00:08:20.411 { 00:08:20.411 "params": { 00:08:20.411 "trtype": "pcie", 00:08:20.411 "traddr": "0000:00:11.0", 00:08:20.411 "name": "Nvme1" 00:08:20.411 }, 00:08:20.411 "method": "bdev_nvme_attach_controller" 00:08:20.411 }, 00:08:20.411 { 00:08:20.411 "method": "bdev_wait_for_examine" 00:08:20.411 } 00:08:20.411 ] 00:08:20.411 } 00:08:20.411 ] 00:08:20.411 } 00:08:20.411 [2024-10-01 06:00:46.008631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.670 [2024-10-01 06:00:46.048404] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.670 [2024-10-01 06:00:46.076413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.928  Copying: 65/65 [MB] (average 1015 MBps) 00:08:20.928 00:08:20.928 06:00:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:20.928 06:00:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:20.928 06:00:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:08:20.928 06:00:46 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:20.928 [2024-10-01 06:00:46.538188] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:20.928 [2024-10-01 06:00:46.538278] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73007 ] 00:08:21.186 { 00:08:21.186 "subsystems": [ 00:08:21.186 { 00:08:21.186 "subsystem": "bdev", 00:08:21.186 "config": [ 00:08:21.186 { 00:08:21.186 "params": { 00:08:21.186 "trtype": "pcie", 00:08:21.186 "traddr": "0000:00:10.0", 00:08:21.186 "name": "Nvme0" 00:08:21.186 }, 00:08:21.186 "method": "bdev_nvme_attach_controller" 00:08:21.186 }, 00:08:21.186 { 00:08:21.186 "params": { 00:08:21.186 "trtype": "pcie", 00:08:21.186 "traddr": "0000:00:11.0", 00:08:21.186 "name": "Nvme1" 00:08:21.186 }, 00:08:21.186 "method": "bdev_nvme_attach_controller" 00:08:21.186 }, 00:08:21.186 { 00:08:21.186 "method": "bdev_wait_for_examine" 00:08:21.186 } 00:08:21.186 ] 00:08:21.186 } 00:08:21.186 ] 00:08:21.186 } 00:08:21.186 [2024-10-01 06:00:46.675368] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.186 [2024-10-01 06:00:46.707488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.186 [2024-10-01 06:00:46.735808] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.444  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:21.444 00:08:21.444 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:21.444 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:21.444 00:08:21.444 real 0m2.397s 00:08:21.444 user 0m1.752s 00:08:21.444 sys 0m0.628s 00:08:21.444 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:21.444 ************************************ 00:08:21.444 END TEST dd_offset_magic 00:08:21.444 ************************************ 00:08:21.444 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:21.703 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:21.703 [2024-10-01 06:00:47.122653] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:21.703 [2024-10-01 06:00:47.122722] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73038 ] 00:08:21.703 { 00:08:21.703 "subsystems": [ 00:08:21.703 { 00:08:21.703 "subsystem": "bdev", 00:08:21.703 "config": [ 00:08:21.703 { 00:08:21.703 "params": { 00:08:21.703 "trtype": "pcie", 00:08:21.703 "traddr": "0000:00:10.0", 00:08:21.703 "name": "Nvme0" 00:08:21.703 }, 00:08:21.703 "method": "bdev_nvme_attach_controller" 00:08:21.703 }, 00:08:21.703 { 00:08:21.703 "params": { 00:08:21.703 "trtype": "pcie", 00:08:21.703 "traddr": "0000:00:11.0", 00:08:21.703 "name": "Nvme1" 00:08:21.703 }, 00:08:21.703 "method": "bdev_nvme_attach_controller" 00:08:21.703 }, 00:08:21.703 { 00:08:21.703 "method": "bdev_wait_for_examine" 00:08:21.703 } 00:08:21.703 ] 00:08:21.703 } 00:08:21.703 ] 00:08:21.703 } 00:08:21.703 [2024-10-01 06:00:47.252654] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.703 [2024-10-01 06:00:47.286233] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.703 [2024-10-01 06:00:47.315273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.221  Copying: 5120/5120 [kB] (average 1666 MBps) 00:08:22.221 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:22.221 06:00:47 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:22.221 [2024-10-01 06:00:47.664639] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:22.221 [2024-10-01 06:00:47.664740] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73054 ] 00:08:22.221 { 00:08:22.221 "subsystems": [ 00:08:22.221 { 00:08:22.221 "subsystem": "bdev", 00:08:22.221 "config": [ 00:08:22.221 { 00:08:22.221 "params": { 00:08:22.221 "trtype": "pcie", 00:08:22.221 "traddr": "0000:00:10.0", 00:08:22.221 "name": "Nvme0" 00:08:22.221 }, 00:08:22.221 "method": "bdev_nvme_attach_controller" 00:08:22.221 }, 00:08:22.221 { 00:08:22.221 "params": { 00:08:22.221 "trtype": "pcie", 00:08:22.221 "traddr": "0000:00:11.0", 00:08:22.221 "name": "Nvme1" 00:08:22.221 }, 00:08:22.221 "method": "bdev_nvme_attach_controller" 00:08:22.221 }, 00:08:22.221 { 00:08:22.221 "method": "bdev_wait_for_examine" 00:08:22.221 } 00:08:22.221 ] 00:08:22.221 } 00:08:22.221 ] 00:08:22.221 } 00:08:22.221 [2024-10-01 06:00:47.796011] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.221 [2024-10-01 06:00:47.833639] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.479 [2024-10-01 06:00:47.863362] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:22.738  Copying: 5120/5120 [kB] (average 833 MBps) 00:08:22.738 00:08:22.738 06:00:48 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:22.738 00:08:22.738 real 0m6.104s 00:08:22.738 user 0m4.551s 00:08:22.738 sys 0m2.861s 00:08:22.738 06:00:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.738 ************************************ 00:08:22.738 06:00:48 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:22.738 END TEST spdk_dd_bdev_to_bdev 00:08:22.738 ************************************ 00:08:22.738 06:00:48 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:22.738 06:00:48 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:22.738 06:00:48 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.738 06:00:48 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.738 06:00:48 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:22.738 ************************************ 00:08:22.738 START TEST spdk_dd_uring 00:08:22.738 ************************************ 00:08:22.738 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:22.738 * Looking for test storage... 
00:08:22.738 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:22.738 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:22.738 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lcov --version 00:08:22.738 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:22.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.997 --rc genhtml_branch_coverage=1 00:08:22.997 --rc genhtml_function_coverage=1 00:08:22.997 --rc genhtml_legend=1 00:08:22.997 --rc geninfo_all_blocks=1 00:08:22.997 --rc geninfo_unexecuted_blocks=1 00:08:22.997 00:08:22.997 ' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:22.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.997 --rc genhtml_branch_coverage=1 00:08:22.997 --rc genhtml_function_coverage=1 00:08:22.997 --rc genhtml_legend=1 00:08:22.997 --rc geninfo_all_blocks=1 00:08:22.997 --rc geninfo_unexecuted_blocks=1 00:08:22.997 00:08:22.997 ' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:22.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.997 --rc genhtml_branch_coverage=1 00:08:22.997 --rc genhtml_function_coverage=1 00:08:22.997 --rc genhtml_legend=1 00:08:22.997 --rc geninfo_all_blocks=1 00:08:22.997 --rc geninfo_unexecuted_blocks=1 00:08:22.997 00:08:22.997 ' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:22.997 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.997 --rc genhtml_branch_coverage=1 00:08:22.997 --rc genhtml_function_coverage=1 00:08:22.997 --rc genhtml_legend=1 00:08:22.997 --rc geninfo_all_blocks=1 00:08:22.997 --rc geninfo_unexecuted_blocks=1 00:08:22.997 00:08:22.997 ' 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.997 06:00:48 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:22.998 ************************************ 00:08:22.998 START TEST dd_uring_copy 00:08:22.998 ************************************ 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1125 -- # uring_zram_copy 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:22.998 
06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=fnux30x3w67hp1dx8xi4sd0iaekvbh34n4t7xrmfzcyr0pwzwa96w2z5jgief323ikwcnkko8bgnw04f1eavgsz3pn7z176p668b7pr5k0xuq07197otxcyebj7ftork8liez7cz7rpuflr0o1todwue544i446k1h2oo8gh9p2b4u5wkseo1pwwn5oal8r8seew5v3y04togb2kpdmf0i4u9l4c1exda0pb1x0e4pdjm5zmk14vargi8u58jnzmox1g4x9y5h94dupc3hctr9t3gbhrldfktzy1niidla3rsyh2p8sipwclve9ai3o6y080tiwmnwqyccrme0q3n0utr0v6tx31cteppfruuxsqx8vcj50xxk6deb9bpnwlou5tsopdxywa6kzm14di9kye0yxts7quq0hsyhv51wgak7zfo3gvu2dppydcog4kl9d6u1w1d39pubzjlt6mv2r54fukira33xpjih1acdqwx76qdhu0ec1vyzewfeago82veaudqivbe4e3vl1dqerhgdal63uno704ol1e3jzo2vgva3s4p7rzhrazr7atqi3s3hizugej2zmumvvmzinmhh0l784bv5btenh6zhagus9gcamtl0bibno3vmiq9tfmxq8wmzbvmrdef8szbyyngdde8iy1bnien6u94znucsem71lsj0zic5sd6cpxee83vga4aw8r3eccju93g33em889oothu6kytipjzl6bjz9j0k90fcb7tyiovkufhsd1ccpsqnjvkml5fb5nomk7xogd9a3j2cuvyhyueoj4ocojaqinss1p4txjh7ydzbp26pe18sj0jmvhmvf8r0bmwnbsc53wooiys2tkh8epwjw5n9ksnlbewbj47ysq2skjcypu8hta58r6mmtdvmprvolsjp4eflhaah5lqo9lvymkm24pc6i0bp88kqcuuyncin10qf3ck6x9arq524i2g6j3gsh5ebm5xfeo65kplnzq1tzl09qqxkh2wsdd 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
fnux30x3w67hp1dx8xi4sd0iaekvbh34n4t7xrmfzcyr0pwzwa96w2z5jgief323ikwcnkko8bgnw04f1eavgsz3pn7z176p668b7pr5k0xuq07197otxcyebj7ftork8liez7cz7rpuflr0o1todwue544i446k1h2oo8gh9p2b4u5wkseo1pwwn5oal8r8seew5v3y04togb2kpdmf0i4u9l4c1exda0pb1x0e4pdjm5zmk14vargi8u58jnzmox1g4x9y5h94dupc3hctr9t3gbhrldfktzy1niidla3rsyh2p8sipwclve9ai3o6y080tiwmnwqyccrme0q3n0utr0v6tx31cteppfruuxsqx8vcj50xxk6deb9bpnwlou5tsopdxywa6kzm14di9kye0yxts7quq0hsyhv51wgak7zfo3gvu2dppydcog4kl9d6u1w1d39pubzjlt6mv2r54fukira33xpjih1acdqwx76qdhu0ec1vyzewfeago82veaudqivbe4e3vl1dqerhgdal63uno704ol1e3jzo2vgva3s4p7rzhrazr7atqi3s3hizugej2zmumvvmzinmhh0l784bv5btenh6zhagus9gcamtl0bibno3vmiq9tfmxq8wmzbvmrdef8szbyyngdde8iy1bnien6u94znucsem71lsj0zic5sd6cpxee83vga4aw8r3eccju93g33em889oothu6kytipjzl6bjz9j0k90fcb7tyiovkufhsd1ccpsqnjvkml5fb5nomk7xogd9a3j2cuvyhyueoj4ocojaqinss1p4txjh7ydzbp26pe18sj0jmvhmvf8r0bmwnbsc53wooiys2tkh8epwjw5n9ksnlbewbj47ysq2skjcypu8hta58r6mmtdvmprvolsjp4eflhaah5lqo9lvymkm24pc6i0bp88kqcuuyncin10qf3ck6x9arq524i2g6j3gsh5ebm5xfeo65kplnzq1tzl09qqxkh2wsdd 00:08:22.998 06:00:48 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:22.998 [2024-10-01 06:00:48.517245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:22.998 [2024-10-01 06:00:48.517346] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73132 ] 00:08:23.256 [2024-10-01 06:00:48.652527] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.256 [2024-10-01 06:00:48.684855] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.256 [2024-10-01 06:00:48.712140] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.090  Copying: 511/511 [MB] (average 1343 MBps) 00:08:24.090 00:08:24.090 06:00:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:08:24.090 06:00:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:24.090 06:00:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:24.090 06:00:49 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:24.090 [2024-10-01 06:00:49.493558] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:24.090 [2024-10-01 06:00:49.493641] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73148 ] 00:08:24.090 { 00:08:24.090 "subsystems": [ 00:08:24.090 { 00:08:24.090 "subsystem": "bdev", 00:08:24.090 "config": [ 00:08:24.090 { 00:08:24.090 "params": { 00:08:24.090 "block_size": 512, 00:08:24.090 "num_blocks": 1048576, 00:08:24.090 "name": "malloc0" 00:08:24.090 }, 00:08:24.090 "method": "bdev_malloc_create" 00:08:24.090 }, 00:08:24.090 { 00:08:24.090 "params": { 00:08:24.090 "filename": "/dev/zram1", 00:08:24.090 "name": "uring0" 00:08:24.090 }, 00:08:24.090 "method": "bdev_uring_create" 00:08:24.090 }, 00:08:24.090 { 00:08:24.090 "method": "bdev_wait_for_examine" 00:08:24.090 } 00:08:24.090 ] 00:08:24.090 } 00:08:24.090 ] 00:08:24.090 } 00:08:24.090 [2024-10-01 06:00:49.625391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.090 [2024-10-01 06:00:49.658765] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.091 [2024-10-01 06:00:49.687138] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.951  Copying: 220/512 [MB] (220 MBps) Copying: 443/512 [MB] (223 MBps) Copying: 512/512 [MB] (average 222 MBps) 00:08:26.951 00:08:26.951 06:00:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:08:26.951 06:00:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:26.951 06:00:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:26.951 06:00:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:26.951 [2024-10-01 06:00:52.374750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:26.951 [2024-10-01 06:00:52.374853] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73186 ] 00:08:26.951 { 00:08:26.951 "subsystems": [ 00:08:26.951 { 00:08:26.951 "subsystem": "bdev", 00:08:26.951 "config": [ 00:08:26.951 { 00:08:26.951 "params": { 00:08:26.951 "block_size": 512, 00:08:26.951 "num_blocks": 1048576, 00:08:26.951 "name": "malloc0" 00:08:26.951 }, 00:08:26.951 "method": "bdev_malloc_create" 00:08:26.951 }, 00:08:26.951 { 00:08:26.951 "params": { 00:08:26.951 "filename": "/dev/zram1", 00:08:26.951 "name": "uring0" 00:08:26.951 }, 00:08:26.951 "method": "bdev_uring_create" 00:08:26.951 }, 00:08:26.951 { 00:08:26.951 "method": "bdev_wait_for_examine" 00:08:26.951 } 00:08:26.951 ] 00:08:26.951 } 00:08:26.951 ] 00:08:26.951 } 00:08:26.951 [2024-10-01 06:00:52.502773] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.951 [2024-10-01 06:00:52.540095] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.208 [2024-10-01 06:00:52.571867] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.445  Copying: 172/512 [MB] (172 MBps) Copying: 342/512 [MB] (170 MBps) Copying: 511/512 [MB] (168 MBps) Copying: 512/512 [MB] (average 170 MBps) 00:08:30.445 00:08:30.445 06:00:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:30.445 06:00:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ fnux30x3w67hp1dx8xi4sd0iaekvbh34n4t7xrmfzcyr0pwzwa96w2z5jgief323ikwcnkko8bgnw04f1eavgsz3pn7z176p668b7pr5k0xuq07197otxcyebj7ftork8liez7cz7rpuflr0o1todwue544i446k1h2oo8gh9p2b4u5wkseo1pwwn5oal8r8seew5v3y04togb2kpdmf0i4u9l4c1exda0pb1x0e4pdjm5zmk14vargi8u58jnzmox1g4x9y5h94dupc3hctr9t3gbhrldfktzy1niidla3rsyh2p8sipwclve9ai3o6y080tiwmnwqyccrme0q3n0utr0v6tx31cteppfruuxsqx8vcj50xxk6deb9bpnwlou5tsopdxywa6kzm14di9kye0yxts7quq0hsyhv51wgak7zfo3gvu2dppydcog4kl9d6u1w1d39pubzjlt6mv2r54fukira33xpjih1acdqwx76qdhu0ec1vyzewfeago82veaudqivbe4e3vl1dqerhgdal63uno704ol1e3jzo2vgva3s4p7rzhrazr7atqi3s3hizugej2zmumvvmzinmhh0l784bv5btenh6zhagus9gcamtl0bibno3vmiq9tfmxq8wmzbvmrdef8szbyyngdde8iy1bnien6u94znucsem71lsj0zic5sd6cpxee83vga4aw8r3eccju93g33em889oothu6kytipjzl6bjz9j0k90fcb7tyiovkufhsd1ccpsqnjvkml5fb5nomk7xogd9a3j2cuvyhyueoj4ocojaqinss1p4txjh7ydzbp26pe18sj0jmvhmvf8r0bmwnbsc53wooiys2tkh8epwjw5n9ksnlbewbj47ysq2skjcypu8hta58r6mmtdvmprvolsjp4eflhaah5lqo9lvymkm24pc6i0bp88kqcuuyncin10qf3ck6x9arq524i2g6j3gsh5ebm5xfeo65kplnzq1tzl09qqxkh2wsdd == 
\f\n\u\x\3\0\x\3\w\6\7\h\p\1\d\x\8\x\i\4\s\d\0\i\a\e\k\v\b\h\3\4\n\4\t\7\x\r\m\f\z\c\y\r\0\p\w\z\w\a\9\6\w\2\z\5\j\g\i\e\f\3\2\3\i\k\w\c\n\k\k\o\8\b\g\n\w\0\4\f\1\e\a\v\g\s\z\3\p\n\7\z\1\7\6\p\6\6\8\b\7\p\r\5\k\0\x\u\q\0\7\1\9\7\o\t\x\c\y\e\b\j\7\f\t\o\r\k\8\l\i\e\z\7\c\z\7\r\p\u\f\l\r\0\o\1\t\o\d\w\u\e\5\4\4\i\4\4\6\k\1\h\2\o\o\8\g\h\9\p\2\b\4\u\5\w\k\s\e\o\1\p\w\w\n\5\o\a\l\8\r\8\s\e\e\w\5\v\3\y\0\4\t\o\g\b\2\k\p\d\m\f\0\i\4\u\9\l\4\c\1\e\x\d\a\0\p\b\1\x\0\e\4\p\d\j\m\5\z\m\k\1\4\v\a\r\g\i\8\u\5\8\j\n\z\m\o\x\1\g\4\x\9\y\5\h\9\4\d\u\p\c\3\h\c\t\r\9\t\3\g\b\h\r\l\d\f\k\t\z\y\1\n\i\i\d\l\a\3\r\s\y\h\2\p\8\s\i\p\w\c\l\v\e\9\a\i\3\o\6\y\0\8\0\t\i\w\m\n\w\q\y\c\c\r\m\e\0\q\3\n\0\u\t\r\0\v\6\t\x\3\1\c\t\e\p\p\f\r\u\u\x\s\q\x\8\v\c\j\5\0\x\x\k\6\d\e\b\9\b\p\n\w\l\o\u\5\t\s\o\p\d\x\y\w\a\6\k\z\m\1\4\d\i\9\k\y\e\0\y\x\t\s\7\q\u\q\0\h\s\y\h\v\5\1\w\g\a\k\7\z\f\o\3\g\v\u\2\d\p\p\y\d\c\o\g\4\k\l\9\d\6\u\1\w\1\d\3\9\p\u\b\z\j\l\t\6\m\v\2\r\5\4\f\u\k\i\r\a\3\3\x\p\j\i\h\1\a\c\d\q\w\x\7\6\q\d\h\u\0\e\c\1\v\y\z\e\w\f\e\a\g\o\8\2\v\e\a\u\d\q\i\v\b\e\4\e\3\v\l\1\d\q\e\r\h\g\d\a\l\6\3\u\n\o\7\0\4\o\l\1\e\3\j\z\o\2\v\g\v\a\3\s\4\p\7\r\z\h\r\a\z\r\7\a\t\q\i\3\s\3\h\i\z\u\g\e\j\2\z\m\u\m\v\v\m\z\i\n\m\h\h\0\l\7\8\4\b\v\5\b\t\e\n\h\6\z\h\a\g\u\s\9\g\c\a\m\t\l\0\b\i\b\n\o\3\v\m\i\q\9\t\f\m\x\q\8\w\m\z\b\v\m\r\d\e\f\8\s\z\b\y\y\n\g\d\d\e\8\i\y\1\b\n\i\e\n\6\u\9\4\z\n\u\c\s\e\m\7\1\l\s\j\0\z\i\c\5\s\d\6\c\p\x\e\e\8\3\v\g\a\4\a\w\8\r\3\e\c\c\j\u\9\3\g\3\3\e\m\8\8\9\o\o\t\h\u\6\k\y\t\i\p\j\z\l\6\b\j\z\9\j\0\k\9\0\f\c\b\7\t\y\i\o\v\k\u\f\h\s\d\1\c\c\p\s\q\n\j\v\k\m\l\5\f\b\5\n\o\m\k\7\x\o\g\d\9\a\3\j\2\c\u\v\y\h\y\u\e\o\j\4\o\c\o\j\a\q\i\n\s\s\1\p\4\t\x\j\h\7\y\d\z\b\p\2\6\p\e\1\8\s\j\0\j\m\v\h\m\v\f\8\r\0\b\m\w\n\b\s\c\5\3\w\o\o\i\y\s\2\t\k\h\8\e\p\w\j\w\5\n\9\k\s\n\l\b\e\w\b\j\4\7\y\s\q\2\s\k\j\c\y\p\u\8\h\t\a\5\8\r\6\m\m\t\d\v\m\p\r\v\o\l\s\j\p\4\e\f\l\h\a\a\h\5\l\q\o\9\l\v\y\m\k\m\2\4\p\c\6\i\0\b\p\8\8\k\q\c\u\u\y\n\c\i\n\1\0\q\f\3\c\k\6\x\9\a\r\q\5\2\4\i\2\g\6\j\3\g\s\h\5\e\b\m\5\x\f\e\o\6\5\k\p\l\n\z\q\1\t\z\l\0\9\q\q\x\k\h\2\w\s\d\d ]] 00:08:30.445 06:00:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:30.445 06:00:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ fnux30x3w67hp1dx8xi4sd0iaekvbh34n4t7xrmfzcyr0pwzwa96w2z5jgief323ikwcnkko8bgnw04f1eavgsz3pn7z176p668b7pr5k0xuq07197otxcyebj7ftork8liez7cz7rpuflr0o1todwue544i446k1h2oo8gh9p2b4u5wkseo1pwwn5oal8r8seew5v3y04togb2kpdmf0i4u9l4c1exda0pb1x0e4pdjm5zmk14vargi8u58jnzmox1g4x9y5h94dupc3hctr9t3gbhrldfktzy1niidla3rsyh2p8sipwclve9ai3o6y080tiwmnwqyccrme0q3n0utr0v6tx31cteppfruuxsqx8vcj50xxk6deb9bpnwlou5tsopdxywa6kzm14di9kye0yxts7quq0hsyhv51wgak7zfo3gvu2dppydcog4kl9d6u1w1d39pubzjlt6mv2r54fukira33xpjih1acdqwx76qdhu0ec1vyzewfeago82veaudqivbe4e3vl1dqerhgdal63uno704ol1e3jzo2vgva3s4p7rzhrazr7atqi3s3hizugej2zmumvvmzinmhh0l784bv5btenh6zhagus9gcamtl0bibno3vmiq9tfmxq8wmzbvmrdef8szbyyngdde8iy1bnien6u94znucsem71lsj0zic5sd6cpxee83vga4aw8r3eccju93g33em889oothu6kytipjzl6bjz9j0k90fcb7tyiovkufhsd1ccpsqnjvkml5fb5nomk7xogd9a3j2cuvyhyueoj4ocojaqinss1p4txjh7ydzbp26pe18sj0jmvhmvf8r0bmwnbsc53wooiys2tkh8epwjw5n9ksnlbewbj47ysq2skjcypu8hta58r6mmtdvmprvolsjp4eflhaah5lqo9lvymkm24pc6i0bp88kqcuuyncin10qf3ck6x9arq524i2g6j3gsh5ebm5xfeo65kplnzq1tzl09qqxkh2wsdd == 
\f\n\u\x\3\0\x\3\w\6\7\h\p\1\d\x\8\x\i\4\s\d\0\i\a\e\k\v\b\h\3\4\n\4\t\7\x\r\m\f\z\c\y\r\0\p\w\z\w\a\9\6\w\2\z\5\j\g\i\e\f\3\2\3\i\k\w\c\n\k\k\o\8\b\g\n\w\0\4\f\1\e\a\v\g\s\z\3\p\n\7\z\1\7\6\p\6\6\8\b\7\p\r\5\k\0\x\u\q\0\7\1\9\7\o\t\x\c\y\e\b\j\7\f\t\o\r\k\8\l\i\e\z\7\c\z\7\r\p\u\f\l\r\0\o\1\t\o\d\w\u\e\5\4\4\i\4\4\6\k\1\h\2\o\o\8\g\h\9\p\2\b\4\u\5\w\k\s\e\o\1\p\w\w\n\5\o\a\l\8\r\8\s\e\e\w\5\v\3\y\0\4\t\o\g\b\2\k\p\d\m\f\0\i\4\u\9\l\4\c\1\e\x\d\a\0\p\b\1\x\0\e\4\p\d\j\m\5\z\m\k\1\4\v\a\r\g\i\8\u\5\8\j\n\z\m\o\x\1\g\4\x\9\y\5\h\9\4\d\u\p\c\3\h\c\t\r\9\t\3\g\b\h\r\l\d\f\k\t\z\y\1\n\i\i\d\l\a\3\r\s\y\h\2\p\8\s\i\p\w\c\l\v\e\9\a\i\3\o\6\y\0\8\0\t\i\w\m\n\w\q\y\c\c\r\m\e\0\q\3\n\0\u\t\r\0\v\6\t\x\3\1\c\t\e\p\p\f\r\u\u\x\s\q\x\8\v\c\j\5\0\x\x\k\6\d\e\b\9\b\p\n\w\l\o\u\5\t\s\o\p\d\x\y\w\a\6\k\z\m\1\4\d\i\9\k\y\e\0\y\x\t\s\7\q\u\q\0\h\s\y\h\v\5\1\w\g\a\k\7\z\f\o\3\g\v\u\2\d\p\p\y\d\c\o\g\4\k\l\9\d\6\u\1\w\1\d\3\9\p\u\b\z\j\l\t\6\m\v\2\r\5\4\f\u\k\i\r\a\3\3\x\p\j\i\h\1\a\c\d\q\w\x\7\6\q\d\h\u\0\e\c\1\v\y\z\e\w\f\e\a\g\o\8\2\v\e\a\u\d\q\i\v\b\e\4\e\3\v\l\1\d\q\e\r\h\g\d\a\l\6\3\u\n\o\7\0\4\o\l\1\e\3\j\z\o\2\v\g\v\a\3\s\4\p\7\r\z\h\r\a\z\r\7\a\t\q\i\3\s\3\h\i\z\u\g\e\j\2\z\m\u\m\v\v\m\z\i\n\m\h\h\0\l\7\8\4\b\v\5\b\t\e\n\h\6\z\h\a\g\u\s\9\g\c\a\m\t\l\0\b\i\b\n\o\3\v\m\i\q\9\t\f\m\x\q\8\w\m\z\b\v\m\r\d\e\f\8\s\z\b\y\y\n\g\d\d\e\8\i\y\1\b\n\i\e\n\6\u\9\4\z\n\u\c\s\e\m\7\1\l\s\j\0\z\i\c\5\s\d\6\c\p\x\e\e\8\3\v\g\a\4\a\w\8\r\3\e\c\c\j\u\9\3\g\3\3\e\m\8\8\9\o\o\t\h\u\6\k\y\t\i\p\j\z\l\6\b\j\z\9\j\0\k\9\0\f\c\b\7\t\y\i\o\v\k\u\f\h\s\d\1\c\c\p\s\q\n\j\v\k\m\l\5\f\b\5\n\o\m\k\7\x\o\g\d\9\a\3\j\2\c\u\v\y\h\y\u\e\o\j\4\o\c\o\j\a\q\i\n\s\s\1\p\4\t\x\j\h\7\y\d\z\b\p\2\6\p\e\1\8\s\j\0\j\m\v\h\m\v\f\8\r\0\b\m\w\n\b\s\c\5\3\w\o\o\i\y\s\2\t\k\h\8\e\p\w\j\w\5\n\9\k\s\n\l\b\e\w\b\j\4\7\y\s\q\2\s\k\j\c\y\p\u\8\h\t\a\5\8\r\6\m\m\t\d\v\m\p\r\v\o\l\s\j\p\4\e\f\l\h\a\a\h\5\l\q\o\9\l\v\y\m\k\m\2\4\p\c\6\i\0\b\p\8\8\k\q\c\u\u\y\n\c\i\n\1\0\q\f\3\c\k\6\x\9\a\r\q\5\2\4\i\2\g\6\j\3\g\s\h\5\e\b\m\5\x\f\e\o\6\5\k\p\l\n\z\q\1\t\z\l\0\9\q\q\x\k\h\2\w\s\d\d ]] 00:08:30.445 06:00:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:30.704 06:00:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:30.704 06:00:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:08:30.704 06:00:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:30.704 06:00:56 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:30.704 [2024-10-01 06:00:56.273995] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:30.704 [2024-10-01 06:00:56.274080] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73248 ] 00:08:30.704 { 00:08:30.704 "subsystems": [ 00:08:30.704 { 00:08:30.704 "subsystem": "bdev", 00:08:30.704 "config": [ 00:08:30.704 { 00:08:30.704 "params": { 00:08:30.704 "block_size": 512, 00:08:30.704 "num_blocks": 1048576, 00:08:30.704 "name": "malloc0" 00:08:30.704 }, 00:08:30.704 "method": "bdev_malloc_create" 00:08:30.704 }, 00:08:30.704 { 00:08:30.704 "params": { 00:08:30.704 "filename": "/dev/zram1", 00:08:30.704 "name": "uring0" 00:08:30.704 }, 00:08:30.704 "method": "bdev_uring_create" 00:08:30.704 }, 00:08:30.704 { 00:08:30.704 "method": "bdev_wait_for_examine" 00:08:30.704 } 00:08:30.704 ] 00:08:30.704 } 00:08:30.704 ] 00:08:30.704 } 00:08:30.962 [2024-10-01 06:00:56.401702] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.962 [2024-10-01 06:00:56.434799] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.962 [2024-10-01 06:00:56.463421] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:34.453  Copying: 143/512 [MB] (143 MBps) Copying: 306/512 [MB] (162 MBps) Copying: 475/512 [MB] (168 MBps) Copying: 512/512 [MB] (average 159 MBps) 00:08:34.453 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:34.453 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:34.710 [2024-10-01 06:01:00.115489] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:34.710 [2024-10-01 06:01:00.115611] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73299 ] 00:08:34.710 { 00:08:34.710 "subsystems": [ 00:08:34.710 { 00:08:34.710 "subsystem": "bdev", 00:08:34.710 "config": [ 00:08:34.710 { 00:08:34.710 "params": { 00:08:34.710 "block_size": 512, 00:08:34.710 "num_blocks": 1048576, 00:08:34.710 "name": "malloc0" 00:08:34.710 }, 00:08:34.710 "method": "bdev_malloc_create" 00:08:34.710 }, 00:08:34.710 { 00:08:34.710 "params": { 00:08:34.710 "filename": "/dev/zram1", 00:08:34.710 "name": "uring0" 00:08:34.710 }, 00:08:34.710 "method": "bdev_uring_create" 00:08:34.710 }, 00:08:34.710 { 00:08:34.710 "params": { 00:08:34.710 "name": "uring0" 00:08:34.710 }, 00:08:34.710 "method": "bdev_uring_delete" 00:08:34.710 }, 00:08:34.710 { 00:08:34.710 "method": "bdev_wait_for_examine" 00:08:34.710 } 00:08:34.710 ] 00:08:34.710 } 00:08:34.710 ] 00:08:34.710 } 00:08:34.710 [2024-10-01 06:01:00.254753] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.710 [2024-10-01 06:01:00.296389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.967 [2024-10-01 06:01:00.331473] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.224  Copying: 0/0 [B] (average 0 Bps) 00:08:35.224 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@650 -- # local es=0 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:35.224 06:01:00 spdk_dd.spdk_dd_uring.dd_uring_copy 
-- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:35.224 [2024-10-01 06:01:00.794558] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:35.224 [2024-10-01 06:01:00.794697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73329 ] 00:08:35.224 { 00:08:35.224 "subsystems": [ 00:08:35.224 { 00:08:35.224 "subsystem": "bdev", 00:08:35.224 "config": [ 00:08:35.224 { 00:08:35.224 "params": { 00:08:35.224 "block_size": 512, 00:08:35.224 "num_blocks": 1048576, 00:08:35.224 "name": "malloc0" 00:08:35.224 }, 00:08:35.224 "method": "bdev_malloc_create" 00:08:35.224 }, 00:08:35.224 { 00:08:35.224 "params": { 00:08:35.224 "filename": "/dev/zram1", 00:08:35.224 "name": "uring0" 00:08:35.224 }, 00:08:35.224 "method": "bdev_uring_create" 00:08:35.224 }, 00:08:35.224 { 00:08:35.224 "params": { 00:08:35.224 "name": "uring0" 00:08:35.224 }, 00:08:35.224 "method": "bdev_uring_delete" 00:08:35.224 }, 00:08:35.224 { 00:08:35.224 "method": "bdev_wait_for_examine" 00:08:35.224 } 00:08:35.224 ] 00:08:35.224 } 00:08:35.224 ] 00:08:35.224 } 00:08:35.482 [2024-10-01 06:01:00.931215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.482 [2024-10-01 06:01:00.964604] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.482 [2024-10-01 06:01:00.993293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.739 [2024-10-01 06:01:01.112845] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:35.739 [2024-10-01 06:01:01.112927] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:35.739 [2024-10-01 06:01:01.112939] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:08:35.739 [2024-10-01 06:01:01.112948] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:35.739 [2024-10-01 06:01:01.281731] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:35.739 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@653 -- # es=237 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@662 -- # es=109 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # case "$es" in 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@670 -- # es=1 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:08:35.740 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:08:35.997 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:08:35.997 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 
/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:35.997 00:08:35.997 real 0m13.163s 00:08:35.997 user 0m8.953s 00:08:35.997 sys 0m11.360s 00:08:35.997 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.997 06:01:01 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:08:35.997 ************************************ 00:08:35.997 END TEST dd_uring_copy 00:08:35.997 ************************************ 00:08:36.254 00:08:36.254 real 0m13.401s 00:08:36.254 user 0m9.090s 00:08:36.254 sys 0m11.466s 00:08:36.254 06:01:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:36.254 06:01:01 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:08:36.254 ************************************ 00:08:36.254 END TEST spdk_dd_uring 00:08:36.254 ************************************ 00:08:36.254 06:01:01 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:36.254 06:01:01 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.254 06:01:01 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.254 06:01:01 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:36.254 ************************************ 00:08:36.254 START TEST spdk_dd_sparse 00:08:36.254 ************************************ 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:36.254 * Looking for test storage... 00:08:36.254 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lcov --version 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:36.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.254 --rc genhtml_branch_coverage=1 00:08:36.254 --rc genhtml_function_coverage=1 00:08:36.254 --rc genhtml_legend=1 00:08:36.254 --rc geninfo_all_blocks=1 00:08:36.254 --rc geninfo_unexecuted_blocks=1 00:08:36.254 00:08:36.254 ' 00:08:36.254 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:36.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.254 --rc genhtml_branch_coverage=1 00:08:36.254 --rc genhtml_function_coverage=1 00:08:36.254 --rc genhtml_legend=1 00:08:36.254 --rc geninfo_all_blocks=1 00:08:36.255 --rc geninfo_unexecuted_blocks=1 00:08:36.255 00:08:36.255 ' 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:36.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.255 --rc genhtml_branch_coverage=1 00:08:36.255 --rc genhtml_function_coverage=1 00:08:36.255 --rc genhtml_legend=1 00:08:36.255 --rc geninfo_all_blocks=1 00:08:36.255 --rc geninfo_unexecuted_blocks=1 00:08:36.255 00:08:36.255 ' 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:36.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:36.255 --rc genhtml_branch_coverage=1 00:08:36.255 --rc genhtml_function_coverage=1 00:08:36.255 --rc genhtml_legend=1 00:08:36.255 --rc geninfo_all_blocks=1 00:08:36.255 --rc geninfo_unexecuted_blocks=1 00:08:36.255 00:08:36.255 ' 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:36.255 06:01:01 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:08:36.255 06:01:01 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:36.512 1+0 records in 00:08:36.512 1+0 records out 00:08:36.512 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00609945 s, 688 MB/s 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:36.512 1+0 records in 00:08:36.512 1+0 records out 00:08:36.512 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00704367 s, 595 MB/s 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:36.512 1+0 records in 00:08:36.512 1+0 records out 00:08:36.512 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00390555 s, 1.1 GB/s 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 ************************************ 00:08:36.512 START TEST dd_sparse_file_to_file 00:08:36.512 ************************************ 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1125 -- # file_to_file 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:36.512 06:01:01 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:36.512 [2024-10-01 06:01:01.965062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:36.512 [2024-10-01 06:01:01.965336] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73423 ] 00:08:36.512 { 00:08:36.512 "subsystems": [ 00:08:36.512 { 00:08:36.512 "subsystem": "bdev", 00:08:36.512 "config": [ 00:08:36.512 { 00:08:36.512 "params": { 00:08:36.512 "block_size": 4096, 00:08:36.512 "filename": "dd_sparse_aio_disk", 00:08:36.512 "name": "dd_aio" 00:08:36.512 }, 00:08:36.512 "method": "bdev_aio_create" 00:08:36.512 }, 00:08:36.512 { 00:08:36.512 "params": { 00:08:36.512 "lvs_name": "dd_lvstore", 00:08:36.512 "bdev_name": "dd_aio" 00:08:36.512 }, 00:08:36.512 "method": "bdev_lvol_create_lvstore" 00:08:36.512 }, 00:08:36.512 { 00:08:36.512 "method": "bdev_wait_for_examine" 00:08:36.512 } 00:08:36.512 ] 00:08:36.512 } 00:08:36.512 ] 00:08:36.512 } 00:08:36.512 [2024-10-01 06:01:02.105073] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:36.770 [2024-10-01 06:01:02.146023] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.770 [2024-10-01 06:01:02.179201] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.028  Copying: 12/36 [MB] (average 1000 MBps) 00:08:37.028 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:37.028 ************************************ 00:08:37.028 END TEST dd_sparse_file_to_file 00:08:37.028 ************************************ 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:37.028 00:08:37.028 real 0m0.528s 00:08:37.028 user 0m0.308s 00:08:37.028 sys 0m0.265s 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:37.028 ************************************ 00:08:37.028 START TEST dd_sparse_file_to_bdev 00:08:37.028 
************************************ 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1125 -- # file_to_bdev 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' ['thin_provision']='true') 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:08:37.028 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.028 [2024-10-01 06:01:02.545189] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:37.028 [2024-10-01 06:01:02.545274] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73464 ] 00:08:37.028 { 00:08:37.028 "subsystems": [ 00:08:37.028 { 00:08:37.028 "subsystem": "bdev", 00:08:37.028 "config": [ 00:08:37.028 { 00:08:37.028 "params": { 00:08:37.028 "block_size": 4096, 00:08:37.028 "filename": "dd_sparse_aio_disk", 00:08:37.028 "name": "dd_aio" 00:08:37.028 }, 00:08:37.028 "method": "bdev_aio_create" 00:08:37.028 }, 00:08:37.028 { 00:08:37.028 "params": { 00:08:37.028 "lvs_name": "dd_lvstore", 00:08:37.028 "lvol_name": "dd_lvol", 00:08:37.028 "size_in_mib": 36, 00:08:37.028 "thin_provision": true 00:08:37.028 }, 00:08:37.028 "method": "bdev_lvol_create" 00:08:37.028 }, 00:08:37.028 { 00:08:37.028 "method": "bdev_wait_for_examine" 00:08:37.028 } 00:08:37.028 ] 00:08:37.028 } 00:08:37.028 ] 00:08:37.028 } 00:08:37.287 [2024-10-01 06:01:02.686206] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.287 [2024-10-01 06:01:02.726646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.287 [2024-10-01 06:01:02.761128] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.544  Copying: 12/36 [MB] (average 521 MBps) 00:08:37.544 00:08:37.544 00:08:37.544 real 0m0.507s 00:08:37.544 user 0m0.320s 00:08:37.544 sys 0m0.243s 00:08:37.544 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:37.544 06:01:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:08:37.544 ************************************ 00:08:37.544 END TEST dd_sparse_file_to_bdev 00:08:37.544 ************************************ 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:37.544 06:01:03 
spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:37.544 ************************************ 00:08:37.544 START TEST dd_sparse_bdev_to_file 00:08:37.544 ************************************ 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1125 -- # bdev_to_file 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:37.544 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:37.545 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:37.545 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:08:37.545 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:08:37.545 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:37.545 [2024-10-01 06:01:03.108227] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:37.545 [2024-10-01 06:01:03.108347] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73498 ] 00:08:37.545 { 00:08:37.545 "subsystems": [ 00:08:37.545 { 00:08:37.545 "subsystem": "bdev", 00:08:37.545 "config": [ 00:08:37.545 { 00:08:37.545 "params": { 00:08:37.545 "block_size": 4096, 00:08:37.545 "filename": "dd_sparse_aio_disk", 00:08:37.545 "name": "dd_aio" 00:08:37.545 }, 00:08:37.545 "method": "bdev_aio_create" 00:08:37.545 }, 00:08:37.545 { 00:08:37.545 "method": "bdev_wait_for_examine" 00:08:37.545 } 00:08:37.545 ] 00:08:37.545 } 00:08:37.545 ] 00:08:37.545 } 00:08:37.803 [2024-10-01 06:01:03.247810] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.803 [2024-10-01 06:01:03.292450] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.803 [2024-10-01 06:01:03.328027] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:38.061  Copying: 12/36 [MB] (average 1000 MBps) 00:08:38.061 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:38.061 06:01:03 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:38.061 ************************************ 00:08:38.061 END TEST dd_sparse_bdev_to_file 00:08:38.061 ************************************ 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:38.061 00:08:38.061 real 0m0.544s 00:08:38.061 user 0m0.333s 00:08:38.061 sys 0m0.275s 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:08:38.061 ************************************ 00:08:38.061 END TEST spdk_dd_sparse 00:08:38.061 ************************************ 00:08:38.061 00:08:38.061 real 0m1.975s 00:08:38.061 user 0m1.134s 00:08:38.061 sys 0m1.005s 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.061 06:01:03 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:08:38.321 06:01:03 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:38.321 06:01:03 spdk_dd -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.321 06:01:03 spdk_dd -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.321 06:01:03 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:38.321 ************************************ 00:08:38.321 START TEST spdk_dd_negative 00:08:38.321 ************************************ 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:38.321 * Looking for test storage... 
00:08:38.321 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lcov --version 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.321 --rc genhtml_branch_coverage=1 00:08:38.321 --rc genhtml_function_coverage=1 00:08:38.321 --rc genhtml_legend=1 00:08:38.321 --rc geninfo_all_blocks=1 00:08:38.321 --rc geninfo_unexecuted_blocks=1 00:08:38.321 00:08:38.321 ' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.321 --rc genhtml_branch_coverage=1 00:08:38.321 --rc genhtml_function_coverage=1 00:08:38.321 --rc genhtml_legend=1 00:08:38.321 --rc geninfo_all_blocks=1 00:08:38.321 --rc geninfo_unexecuted_blocks=1 00:08:38.321 00:08:38.321 ' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.321 --rc genhtml_branch_coverage=1 00:08:38.321 --rc genhtml_function_coverage=1 00:08:38.321 --rc genhtml_legend=1 00:08:38.321 --rc geninfo_all_blocks=1 00:08:38.321 --rc geninfo_unexecuted_blocks=1 00:08:38.321 00:08:38.321 ' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:38.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:38.321 --rc genhtml_branch_coverage=1 00:08:38.321 --rc genhtml_function_coverage=1 00:08:38.321 --rc genhtml_legend=1 00:08:38.321 --rc geninfo_all_blocks=1 00:08:38.321 --rc geninfo_unexecuted_blocks=1 00:08:38.321 00:08:38.321 ' 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.321 06:01:03 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:38.322 ************************************ 00:08:38.322 START TEST 
dd_invalid_arguments 00:08:38.322 ************************************ 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1125 -- # invalid_arguments 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@650 -- # local es=0 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.322 06:01:03 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:38.581 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:38.581 00:08:38.581 CPU options: 00:08:38.581 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:08:38.581 (like [0,1,10]) 00:08:38.581 --lcores lcore to CPU mapping list. The list is in the format: 00:08:38.581 [<,lcores[@CPUs]>...] 00:08:38.581 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:38.581 Within the group, '-' is used for range separator, 00:08:38.581 ',' is used for single number separator. 00:08:38.581 '( )' can be omitted for single element group, 00:08:38.581 '@' can be omitted if cpus and lcores have the same value 00:08:38.581 --disable-cpumask-locks Disable CPU core lock files. 00:08:38.581 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:08:38.581 pollers in the app support interrupt mode) 00:08:38.581 -p, --main-core main (primary) core for DPDK 00:08:38.581 00:08:38.581 Configuration options: 00:08:38.581 -c, --config, --json JSON config file 00:08:38.581 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:38.581 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:08:38.581 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:38.581 --rpcs-allowed comma-separated list of permitted RPCS 00:08:38.581 --json-ignore-init-errors don't exit on invalid config entry 00:08:38.581 00:08:38.581 Memory options: 00:08:38.581 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:38.581 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:38.581 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:38.581 -R, --huge-unlink unlink huge files after initialization 00:08:38.581 -n, --mem-channels number of memory channels used for DPDK 00:08:38.581 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:38.581 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:38.581 --no-huge run without using hugepages 00:08:38.581 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:08:38.581 -i, --shm-id shared memory ID (optional) 00:08:38.581 -g, --single-file-segments force creating just one hugetlbfs file 00:08:38.581 00:08:38.581 PCI options: 00:08:38.581 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:38.581 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:38.581 -u, --no-pci disable PCI access 00:08:38.581 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:38.581 00:08:38.581 Log options: 00:08:38.581 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:08:38.582 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:08:38.582 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:08:38.582 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:08:38.582 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, gpt_parse, idxd, ioat, 00:08:38.582 iscsi_init, json_util, keyring, log_rpc, lvol, lvol_rpc, notify_rpc, 00:08:38.582 nvme, nvme_auth, nvme_cuse, opal, reactor, rpc, rpc_client, sock, 00:08:38.582 sock_posix, spdk_aio_mgr_io, thread, trace, uring, vbdev_delay, 00:08:38.582 vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, 00:08:38.582 vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, 00:08:38.582 virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:38.582 --silence-noticelog disable notice level logging to stderr 00:08:38.582 00:08:38.582 Trace options: 00:08:38.582 --num-trace-entries number of trace entries for each core, must be power of 2, 00:08:38.582 setting 0 to disable trace (default 32768) 00:08:38.582 Tracepoints vary in size and can use more than one trace entry. 00:08:38.582 -e, --tpoint-group [:] 00:08:38.582 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:38.582 [2024-10-01 06:01:03.979530] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:08:38.582 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, 00:08:38.582 blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, blob, 00:08:38.582 bdev_raid, all). 00:08:38.582 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:08:38.582 a tracepoint group. First tpoint inside a group can be enabled by 00:08:38.582 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:08:38.582 combined (e.g. thread,bdev:0x1). 
All available tpoints can be found 00:08:38.582 in /include/spdk_internal/trace_defs.h 00:08:38.582 00:08:38.582 Other options: 00:08:38.582 -h, --help show this usage 00:08:38.582 -v, --version print SPDK version 00:08:38.582 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:38.582 --env-context Opaque context for use of the env implementation 00:08:38.582 00:08:38.582 Application specific: 00:08:38.582 [--------- DD Options ---------] 00:08:38.582 --if Input file. Must specify either --if or --ib. 00:08:38.582 --ib Input bdev. Must specifier either --if or --ib 00:08:38.582 --of Output file. Must specify either --of or --ob. 00:08:38.582 --ob Output bdev. Must specify either --of or --ob. 00:08:38.582 --iflag Input file flags. 00:08:38.582 --oflag Output file flags. 00:08:38.582 --bs I/O unit size (default: 4096) 00:08:38.582 --qd Queue depth (default: 2) 00:08:38.582 --count I/O unit count. The number of I/O units to copy. (default: all) 00:08:38.582 --skip Skip this many I/O units at start of input. (default: 0) 00:08:38.582 --seek Skip this many I/O units at start of output. (default: 0) 00:08:38.582 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:38.582 --sparse Enable hole skipping in input target 00:08:38.582 Available iflag and oflag values: 00:08:38.582 append - append mode 00:08:38.582 direct - use direct I/O for data 00:08:38.582 directory - fail unless a directory 00:08:38.582 dsync - use synchronized I/O for data 00:08:38.582 noatime - do not update access time 00:08:38.582 noctty - do not assign controlling terminal from file 00:08:38.582 nofollow - do not follow symlinks 00:08:38.582 nonblock - use non-blocking I/O 00:08:38.582 sync - use synchronized I/O for data and metadata 00:08:38.582 ************************************ 00:08:38.582 END TEST dd_invalid_arguments 00:08:38.582 ************************************ 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@653 -- # es=2 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.582 00:08:38.582 real 0m0.085s 00:08:38.582 user 0m0.053s 00:08:38.582 sys 0m0.031s 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:38.582 ************************************ 00:08:38.582 START TEST dd_double_input 00:08:38.582 ************************************ 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1125 -- # double_input 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@650 -- # local es=0 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:38.582 [2024-10-01 06:01:04.112105] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
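For reference, the double-input rejection traced above is straightforward to reproduce outside the autotest harness. The sketch below is illustrative only: it assumes an SPDK tree built under ./spdk, the scratch file paths are made up, and the bdev name handed to --ib is never opened because spdk_dd's argument check fails first.

    # create a throwaway input file
    touch /tmp/dd.dump0
    # passing both a file input (--if) and a bdev input (--ib) is rejected up front
    ./spdk/build/bin/spdk_dd --if=/tmp/dd.dump0 --ib=some_bdev --of=/tmp/dd.dump1
    # spdk_dd prints "You may specify either --if or --ib, but not both."
    # and exits non-zero (the harness captured 22 in this run), which is what the NOT wrapper asserts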
00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@653 -- # es=22 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.582 00:08:38.582 real 0m0.072s 00:08:38.582 user 0m0.041s 00:08:38.582 sys 0m0.030s 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:08:38.582 ************************************ 00:08:38.582 END TEST dd_double_input 00:08:38.582 ************************************ 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:38.582 ************************************ 00:08:38.582 START TEST dd_double_output 00:08:38.582 ************************************ 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1125 -- # double_output 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@650 -- # local es=0 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.582 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:38.841 [2024-10-01 06:01:04.235448] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@653 -- # es=22 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.841 00:08:38.841 real 0m0.074s 00:08:38.841 user 0m0.045s 00:08:38.841 sys 0m0.028s 00:08:38.841 ************************************ 00:08:38.841 END TEST dd_double_output 00:08:38.841 ************************************ 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.841 06:01:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:38.842 ************************************ 00:08:38.842 START TEST dd_no_input 00:08:38.842 ************************************ 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1125 -- # no_input 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@650 -- # local es=0 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:38.842 [2024-10-01 06:01:04.350783] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@653 -- # es=22 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:38.842 00:08:38.842 real 0m0.062s 00:08:38.842 user 0m0.029s 00:08:38.842 sys 0m0.031s 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:38.842 ************************************ 00:08:38.842 END TEST dd_no_input 00:08:38.842 ************************************ 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:38.842 ************************************ 00:08:38.842 START TEST dd_no_output 00:08:38.842 ************************************ 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1125 -- # no_output 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@650 -- # local es=0 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.842 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:39.128 [2024-10-01 06:01:04.463298] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:08:39.128 06:01:04 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@653 -- # es=22 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.128 ************************************ 00:08:39.128 END TEST dd_no_output 00:08:39.128 ************************************ 00:08:39.128 00:08:39.128 real 0m0.060s 00:08:39.128 user 0m0.038s 00:08:39.128 sys 0m0.022s 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.128 ************************************ 00:08:39.128 START TEST dd_wrong_blocksize 00:08:39.128 ************************************ 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1125 -- # wrong_blocksize 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:39.128 [2024-10-01 06:01:04.584954] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@653 -- # es=22 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.128 00:08:39.128 real 0m0.075s 00:08:39.128 user 0m0.046s 00:08:39.128 sys 0m0.029s 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:39.128 ************************************ 00:08:39.128 END TEST dd_wrong_blocksize 00:08:39.128 ************************************ 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.128 ************************************ 00:08:39.128 START TEST dd_smaller_blocksize 00:08:39.128 ************************************ 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1125 -- # smaller_blocksize 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:39.128 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@650 -- # local es=0 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.129 
06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.129 06:01:04 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:39.129 [2024-10-01 06:01:04.709852] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:39.129 [2024-10-01 06:01:04.709954] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73724 ] 00:08:39.416 [2024-10-01 06:01:04.849424] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.416 [2024-10-01 06:01:04.890589] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.416 [2024-10-01 06:01:04.922616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.416 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:39.416 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:39.416 [2024-10-01 06:01:04.940510] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:39.416 [2024-10-01 06:01:04.940541] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.416 [2024-10-01 06:01:05.007278] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@653 -- # es=244 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@662 -- # es=116 00:08:39.685 ************************************ 00:08:39.685 END TEST dd_smaller_blocksize 00:08:39.685 ************************************ 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # case "$es" in 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@670 -- # es=1 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.685 00:08:39.685 real 0m0.426s 00:08:39.685 user 0m0.218s 00:08:39.685 sys 0m0.103s 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.685 ************************************ 00:08:39.685 START TEST dd_invalid_count 00:08:39.685 ************************************ 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1125 -- # invalid_count 
00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@650 -- # local es=0 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.685 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:39.686 [2024-10-01 06:01:05.195788] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@653 -- # es=22 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.686 ************************************ 00:08:39.686 END TEST dd_invalid_count 00:08:39.686 ************************************ 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.686 00:08:39.686 real 0m0.077s 00:08:39.686 user 0m0.053s 00:08:39.686 sys 0m0.023s 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.686 ************************************ 
00:08:39.686 START TEST dd_invalid_oflag 00:08:39.686 ************************************ 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1125 -- # invalid_oflag 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@650 -- # local es=0 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.686 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:39.945 [2024-10-01 06:01:05.320344] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@653 -- # es=22 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.945 ************************************ 00:08:39.945 END TEST dd_invalid_oflag 00:08:39.945 ************************************ 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.945 00:08:39.945 real 0m0.075s 00:08:39.945 user 0m0.046s 00:08:39.945 sys 0m0.028s 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.945 ************************************ 00:08:39.945 START TEST dd_invalid_iflag 00:08:39.945 
************************************ 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1125 -- # invalid_iflag 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@650 -- # local es=0 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:39.945 [2024-10-01 06:01:05.446739] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@653 -- # es=22 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:39.945 ************************************ 00:08:39.945 END TEST dd_invalid_iflag 00:08:39.945 ************************************ 00:08:39.945 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:39.946 00:08:39.946 real 0m0.073s 00:08:39.946 user 0m0.040s 00:08:39.946 sys 0m0.033s 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:39.946 ************************************ 00:08:39.946 START TEST dd_unknown_flag 00:08:39.946 ************************************ 00:08:39.946 
06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1125 -- # unknown_flag 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@650 -- # local es=0 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:39.946 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:40.204 [2024-10-01 06:01:05.574684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:40.204 [2024-10-01 06:01:05.574781] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73811 ] 00:08:40.204 [2024-10-01 06:01:05.712546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.204 [2024-10-01 06:01:05.753399] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.204 [2024-10-01 06:01:05.785096] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.204 [2024-10-01 06:01:05.802232] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:40.204 [2024-10-01 06:01:05.802299] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.204 [2024-10-01 06:01:05.802367] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:08:40.205 [2024-10-01 06:01:05.802384] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.205 [2024-10-01 06:01:05.802636] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:08:40.205 [2024-10-01 06:01:05.802656] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.205 [2024-10-01 06:01:05.802720] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:40.205 [2024-10-01 06:01:05.802733] app.c:1046:app_stop: *NOTICE*: spdk_app_stop called twice 00:08:40.463 [2024-10-01 06:01:05.869715] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@653 -- # es=234 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@662 -- # es=106 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # case "$es" in 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@670 -- # es=1 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.463 00:08:40.463 real 0m0.429s 00:08:40.463 user 0m0.206s 00:08:40.463 sys 0m0.130s 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.463 ************************************ 00:08:40.463 END TEST dd_unknown_flag 00:08:40.463 ************************************ 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.463 06:01:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:40.463 ************************************ 00:08:40.463 START TEST dd_invalid_json 00:08:40.463 ************************************ 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1125 -- # invalid_json 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@650 -- # local es=0 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.463 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:40.463 [2024-10-01 06:01:06.063531] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:40.463 [2024-10-01 06:01:06.063667] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73845 ] 00:08:40.721 [2024-10-01 06:01:06.204187] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.721 [2024-10-01 06:01:06.248965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.721 [2024-10-01 06:01:06.249051] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:08:40.722 [2024-10-01 06:01:06.249067] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:40.722 [2024-10-01 06:01:06.249078] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:40.722 [2024-10-01 06:01:06.249123] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@653 -- # es=234 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@662 -- # es=106 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # case "$es" in 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@670 -- # es=1 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:40.981 00:08:40.981 real 0m0.336s 00:08:40.981 user 0m0.163s 00:08:40.981 sys 0m0.072s 00:08:40.981 ************************************ 00:08:40.981 END TEST dd_invalid_json 00:08:40.981 ************************************ 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 ************************************ 00:08:40.981 START TEST dd_invalid_seek 00:08:40.981 ************************************ 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1125 -- # invalid_seek 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:40.981 
06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@650 -- # local es=0 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:40.981 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:08:40.981 [2024-10-01 06:01:06.450164] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:40.981 [2024-10-01 06:01:06.450270] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73869 ] 00:08:40.981 { 00:08:40.981 "subsystems": [ 00:08:40.981 { 00:08:40.981 "subsystem": "bdev", 00:08:40.981 "config": [ 00:08:40.981 { 00:08:40.981 "params": { 00:08:40.981 "block_size": 512, 00:08:40.981 "num_blocks": 512, 00:08:40.981 "name": "malloc0" 00:08:40.981 }, 00:08:40.981 "method": "bdev_malloc_create" 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "params": { 00:08:40.981 "block_size": 512, 00:08:40.981 "num_blocks": 512, 00:08:40.981 "name": "malloc1" 00:08:40.981 }, 00:08:40.981 "method": "bdev_malloc_create" 00:08:40.981 }, 00:08:40.981 { 00:08:40.981 "method": "bdev_wait_for_examine" 00:08:40.981 } 00:08:40.981 ] 00:08:40.981 } 00:08:40.981 ] 00:08:40.981 } 00:08:40.981 [2024-10-01 06:01:06.591344] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.240 [2024-10-01 06:01:06.636006] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.240 [2024-10-01 06:01:06.671345] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.240 [2024-10-01 06:01:06.716415] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:08:41.240 [2024-10-01 06:01:06.716490] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.240 [2024-10-01 06:01:06.784240] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@653 -- # es=228 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@662 -- # es=100 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # case "$es" in 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@670 -- # es=1 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.240 00:08:41.240 real 0m0.461s 00:08:41.240 user 0m0.295s 00:08:41.240 sys 0m0.128s 00:08:41.240 ************************************ 00:08:41.240 END TEST dd_invalid_seek 00:08:41.240 ************************************ 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.240 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.499 ************************************ 00:08:41.499 START TEST dd_invalid_skip 00:08:41.499 ************************************ 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1125 -- # invalid_skip 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- 
dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@650 -- # local es=0 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.499 06:01:06 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:08:41.499 [2024-10-01 06:01:06.957695] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:41.499 [2024-10-01 06:01:06.957953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73908 ] 00:08:41.499 { 00:08:41.499 "subsystems": [ 00:08:41.499 { 00:08:41.499 "subsystem": "bdev", 00:08:41.499 "config": [ 00:08:41.499 { 00:08:41.499 "params": { 00:08:41.499 "block_size": 512, 00:08:41.499 "num_blocks": 512, 00:08:41.499 "name": "malloc0" 00:08:41.499 }, 00:08:41.499 "method": "bdev_malloc_create" 00:08:41.499 }, 00:08:41.499 { 00:08:41.499 "params": { 00:08:41.499 "block_size": 512, 00:08:41.499 "num_blocks": 512, 00:08:41.499 "name": "malloc1" 00:08:41.499 }, 00:08:41.499 "method": "bdev_malloc_create" 00:08:41.499 }, 00:08:41.499 { 00:08:41.499 "method": "bdev_wait_for_examine" 00:08:41.499 } 00:08:41.499 ] 00:08:41.499 } 00:08:41.499 ] 00:08:41.499 } 00:08:41.499 [2024-10-01 06:01:07.096889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.757 [2024-10-01 06:01:07.133037] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.757 [2024-10-01 06:01:07.160358] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:41.757 [2024-10-01 06:01:07.200412] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:08:41.757 [2024-10-01 06:01:07.200481] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:41.757 [2024-10-01 06:01:07.257032] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@653 -- # es=228 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@662 -- # es=100 00:08:41.757 ************************************ 00:08:41.757 END TEST dd_invalid_skip 00:08:41.757 ************************************ 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # case "$es" in 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@670 -- # es=1 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:41.757 00:08:41.757 real 0m0.423s 00:08:41.757 user 0m0.262s 00:08:41.757 sys 0m0.118s 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:41.757 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:41.757 ************************************ 00:08:41.757 START TEST dd_invalid_input_count 00:08:41.757 ************************************ 00:08:42.016 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1125 -- # invalid_input_count 00:08:42.017 06:01:07 
spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@650 -- # local es=0 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.017 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:08:42.017 [2024-10-01 06:01:07.450755] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:42.017 [2024-10-01 06:01:07.450873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73936 ] 00:08:42.017 { 00:08:42.017 "subsystems": [ 00:08:42.017 { 00:08:42.017 "subsystem": "bdev", 00:08:42.017 "config": [ 00:08:42.017 { 00:08:42.017 "params": { 00:08:42.017 "block_size": 512, 00:08:42.017 "num_blocks": 512, 00:08:42.017 "name": "malloc0" 00:08:42.017 }, 00:08:42.017 "method": "bdev_malloc_create" 00:08:42.017 }, 00:08:42.017 { 00:08:42.017 "params": { 00:08:42.017 "block_size": 512, 00:08:42.017 "num_blocks": 512, 00:08:42.017 "name": "malloc1" 00:08:42.017 }, 00:08:42.017 "method": "bdev_malloc_create" 00:08:42.017 }, 00:08:42.017 { 00:08:42.017 "method": "bdev_wait_for_examine" 00:08:42.017 } 00:08:42.017 ] 00:08:42.017 } 00:08:42.017 ] 00:08:42.017 } 00:08:42.017 [2024-10-01 06:01:07.590658] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.017 [2024-10-01 06:01:07.623738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.276 [2024-10-01 06:01:07.651928] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.276 [2024-10-01 06:01:07.693040] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:08:42.276 [2024-10-01 06:01:07.693111] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.276 [2024-10-01 06:01:07.752495] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@653 -- # es=228 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@662 -- # es=100 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@670 -- # es=1 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.276 00:08:42.276 real 0m0.447s 00:08:42.276 user 0m0.320s 00:08:42.276 sys 0m0.113s 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.276 ************************************ 00:08:42.276 END TEST dd_invalid_input_count 00:08:42.276 ************************************ 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.276 ************************************ 00:08:42.276 START TEST dd_invalid_output_count 00:08:42.276 ************************************ 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1125 -- # 
invalid_output_count 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@650 -- # local es=0 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.276 06:01:07 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:08:42.535 { 00:08:42.535 "subsystems": [ 00:08:42.535 { 00:08:42.535 "subsystem": "bdev", 00:08:42.535 "config": [ 00:08:42.535 { 00:08:42.535 "params": { 00:08:42.535 "block_size": 512, 00:08:42.535 "num_blocks": 512, 00:08:42.535 "name": "malloc0" 00:08:42.535 }, 00:08:42.535 "method": "bdev_malloc_create" 00:08:42.535 }, 00:08:42.535 { 00:08:42.535 "method": "bdev_wait_for_examine" 00:08:42.535 } 00:08:42.535 ] 00:08:42.535 } 00:08:42.535 ] 00:08:42.535 } 00:08:42.535 [2024-10-01 06:01:07.932654] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 
initialization... 00:08:42.535 [2024-10-01 06:01:07.932749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73975 ] 00:08:42.535 [2024-10-01 06:01:08.068008] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.535 [2024-10-01 06:01:08.098884] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.535 [2024-10-01 06:01:08.126423] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.794 [2024-10-01 06:01:08.160773] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:08:42.794 [2024-10-01 06:01:08.160860] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.794 [2024-10-01 06:01:08.217986] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@653 -- # es=228 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@662 -- # es=100 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # case "$es" in 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@670 -- # es=1 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:42.794 00:08:42.794 real 0m0.406s 00:08:42.794 user 0m0.250s 00:08:42.794 sys 0m0.100s 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:08:42.794 ************************************ 00:08:42.794 END TEST dd_invalid_output_count 00:08:42.794 ************************************ 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:42.794 ************************************ 00:08:42.794 START TEST dd_bs_not_multiple 00:08:42.794 ************************************ 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1125 -- # bs_not_multiple 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:08:42.794 06:01:08 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@650 -- # local es=0 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.794 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:08:42.794 [2024-10-01 06:01:08.389504] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:42.794 [2024-10-01 06:01:08.389763] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74001 ] 00:08:42.794 { 00:08:42.794 "subsystems": [ 00:08:42.794 { 00:08:42.794 "subsystem": "bdev", 00:08:42.794 "config": [ 00:08:42.794 { 00:08:42.794 "params": { 00:08:42.794 "block_size": 512, 00:08:42.794 "num_blocks": 512, 00:08:42.794 "name": "malloc0" 00:08:42.794 }, 00:08:42.794 "method": "bdev_malloc_create" 00:08:42.794 }, 00:08:42.794 { 00:08:42.794 "params": { 00:08:42.794 "block_size": 512, 00:08:42.794 "num_blocks": 512, 00:08:42.794 "name": "malloc1" 00:08:42.794 }, 00:08:42.794 "method": "bdev_malloc_create" 00:08:42.794 }, 00:08:42.794 { 00:08:42.794 "method": "bdev_wait_for_examine" 00:08:42.794 } 00:08:42.794 ] 00:08:42.794 } 00:08:42.794 ] 00:08:42.794 } 00:08:43.053 [2024-10-01 06:01:08.529491] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.053 [2024-10-01 06:01:08.563727] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.053 [2024-10-01 06:01:08.590293] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.053 [2024-10-01 06:01:08.630544] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:08:43.053 [2024-10-01 06:01:08.630611] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.312 [2024-10-01 06:01:08.686400] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@653 -- # es=234 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@662 -- # es=106 00:08:43.312 ************************************ 00:08:43.312 END TEST dd_bs_not_multiple 00:08:43.312 ************************************ 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # case "$es" in 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@670 -- # es=1 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:43.312 00:08:43.312 real 0m0.419s 00:08:43.312 user 0m0.268s 00:08:43.312 sys 0m0.110s 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:08:43.312 ************************************ 00:08:43.312 END TEST spdk_dd_negative 00:08:43.312 ************************************ 00:08:43.312 00:08:43.312 real 0m5.077s 00:08:43.312 user 0m2.754s 00:08:43.312 sys 0m1.734s 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:43.312 06:01:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:08:43.312 ************************************ 00:08:43.312 END TEST spdk_dd 00:08:43.312 ************************************ 00:08:43.312 00:08:43.312 real 1m4.195s 00:08:43.312 user 0m40.435s 00:08:43.312 sys 0m27.530s 00:08:43.312 06:01:08 spdk_dd -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:08:43.312 06:01:08 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:43.312 06:01:08 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@256 -- # timing_exit lib 00:08:43.312 06:01:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:43.312 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:08:43.312 06:01:08 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:08:43.312 06:01:08 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:08:43.312 06:01:08 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.312 06:01:08 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.312 06:01:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.312 06:01:08 -- common/autotest_common.sh@10 -- # set +x 00:08:43.312 ************************************ 00:08:43.312 START TEST nvmf_tcp 00:08:43.312 ************************************ 00:08:43.312 06:01:08 nvmf_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:43.580 * Looking for test storage... 00:08:43.580 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:43.580 06:01:08 nvmf_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.580 06:01:09 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:43.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.580 --rc genhtml_branch_coverage=1 00:08:43.580 --rc genhtml_function_coverage=1 00:08:43.580 --rc genhtml_legend=1 00:08:43.580 --rc geninfo_all_blocks=1 00:08:43.580 --rc geninfo_unexecuted_blocks=1 00:08:43.580 00:08:43.580 ' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:43.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.580 --rc genhtml_branch_coverage=1 00:08:43.580 --rc genhtml_function_coverage=1 00:08:43.580 --rc genhtml_legend=1 00:08:43.580 --rc geninfo_all_blocks=1 00:08:43.580 --rc geninfo_unexecuted_blocks=1 00:08:43.580 00:08:43.580 ' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:43.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.580 --rc genhtml_branch_coverage=1 00:08:43.580 --rc genhtml_function_coverage=1 00:08:43.580 --rc genhtml_legend=1 00:08:43.580 --rc geninfo_all_blocks=1 00:08:43.580 --rc geninfo_unexecuted_blocks=1 00:08:43.580 00:08:43.580 ' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:43.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.580 --rc genhtml_branch_coverage=1 00:08:43.580 --rc genhtml_function_coverage=1 00:08:43.580 --rc genhtml_legend=1 00:08:43.580 --rc geninfo_all_blocks=1 00:08:43.580 --rc geninfo_unexecuted_blocks=1 00:08:43.580 00:08:43.580 ' 00:08:43.580 06:01:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:08:43.580 06:01:09 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:43.580 06:01:09 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.580 06:01:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:43.580 ************************************ 00:08:43.580 START TEST nvmf_target_core 00:08:43.580 ************************************ 00:08:43.580 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:08:43.844 * Looking for test storage... 00:08:43.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:43.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.844 --rc genhtml_branch_coverage=1 00:08:43.844 --rc genhtml_function_coverage=1 00:08:43.844 --rc genhtml_legend=1 00:08:43.844 --rc geninfo_all_blocks=1 00:08:43.844 --rc geninfo_unexecuted_blocks=1 00:08:43.844 00:08:43.844 ' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:43.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.844 --rc genhtml_branch_coverage=1 00:08:43.844 --rc genhtml_function_coverage=1 00:08:43.844 --rc genhtml_legend=1 00:08:43.844 --rc geninfo_all_blocks=1 00:08:43.844 --rc geninfo_unexecuted_blocks=1 00:08:43.844 00:08:43.844 ' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:43.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.844 --rc genhtml_branch_coverage=1 00:08:43.844 --rc genhtml_function_coverage=1 00:08:43.844 --rc genhtml_legend=1 00:08:43.844 --rc geninfo_all_blocks=1 00:08:43.844 --rc geninfo_unexecuted_blocks=1 00:08:43.844 00:08:43.844 ' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:43.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:43.844 --rc genhtml_branch_coverage=1 00:08:43.844 --rc genhtml_function_coverage=1 00:08:43.844 --rc genhtml_legend=1 00:08:43.844 --rc geninfo_all_blocks=1 00:08:43.844 --rc geninfo_unexecuted_blocks=1 00:08:43.844 00:08:43.844 ' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
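Note: the `nvme gen-hostnqn` call traced above is what populates NVME_HOSTNQN, and NVME_HOSTID is the UUID portion of that NQN; the pair travels in the NVME_HOST array so later `nvme connect` invocations identify the initiator consistently. A minimal illustrative sketch of that pattern, not the exact common.sh code (the UUID extraction and the connect line are assumptions; the address, port and subsystem NQN are the ones this run uses later):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # one way to keep just the trailing UUID (assumption, not the harness code)
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

    # Host-side connect to the listener this test brings up later:
    nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode0 "${NVME_HOST[@]}"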
00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:43.844 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:43.845 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:43.845 ************************************ 00:08:43.845 START TEST nvmf_host_management 00:08:43.845 ************************************ 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:43.845 * Looking for test storage... 
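Note: the "[: : integer expression expected" complaint from common.sh line 33 above is benign. `[ x -eq 1 ]` requires integers on both sides, and the left-hand variable expands to an empty string here, so the test raises that error and simply evaluates false, letting build_nvmf_app_args continue. A defensive form of the same check (the variable name below is a placeholder, not the one common.sh actually tests):

    some_flag=""                           # stands in for the unset/empty variable at common.sh line 33
    if [ "${some_flag:-0}" -eq 1 ]; then   # defaulting the empty value to 0 keeps -eq comparing integers
            echo "flag enabled"
    fi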
00:08:43.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lcov --version 00:08:43.845 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:44.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.105 --rc genhtml_branch_coverage=1 00:08:44.105 --rc genhtml_function_coverage=1 00:08:44.105 --rc genhtml_legend=1 00:08:44.105 --rc geninfo_all_blocks=1 00:08:44.105 --rc geninfo_unexecuted_blocks=1 00:08:44.105 00:08:44.105 ' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:44.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.105 --rc genhtml_branch_coverage=1 00:08:44.105 --rc genhtml_function_coverage=1 00:08:44.105 --rc genhtml_legend=1 00:08:44.105 --rc geninfo_all_blocks=1 00:08:44.105 --rc geninfo_unexecuted_blocks=1 00:08:44.105 00:08:44.105 ' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:44.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.105 --rc genhtml_branch_coverage=1 00:08:44.105 --rc genhtml_function_coverage=1 00:08:44.105 --rc genhtml_legend=1 00:08:44.105 --rc geninfo_all_blocks=1 00:08:44.105 --rc geninfo_unexecuted_blocks=1 00:08:44.105 00:08:44.105 ' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:44.105 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:44.105 --rc genhtml_branch_coverage=1 00:08:44.105 --rc genhtml_function_coverage=1 00:08:44.105 --rc genhtml_legend=1 00:08:44.105 --rc geninfo_all_blocks=1 00:08:44.105 --rc geninfo_unexecuted_blocks=1 00:08:44.105 00:08:44.105 ' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
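Note: the scripts/common.sh trace repeated above (`lt 1.15 2` via cmp_versions) is the harness deciding whether the installed lcov is older than 2.x, in which case the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options are used. The comparison splits each version string on dots and compares field by field; a condensed sketch of the same idea, not the full cmp_versions helper:

    lt() {                                   # returns 0 (true) when version $1 sorts before $2
            local -a ver1 ver2
            IFS=.-: read -ra ver1 <<< "$1"
            IFS=.-: read -ra ver2 <<< "$2"
            local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
            for (( v = 0; v < n; v++ )); do
                    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
                    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            done
            return 1                         # equal versions are not "less than"
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
            lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi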
00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.105 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:44.106 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.106 06:01:09 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:44.106 Cannot find device "nvmf_init_br" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:44.106 Cannot find device "nvmf_init_br2" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:44.106 Cannot find device "nvmf_tgt_br" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.106 Cannot find device "nvmf_tgt_br2" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:44.106 Cannot find device "nvmf_init_br" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:44.106 Cannot find device "nvmf_init_br2" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:44.106 Cannot find device "nvmf_tgt_br" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:44.106 Cannot find device "nvmf_tgt_br2" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:44.106 Cannot find device "nvmf_br" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:44.106 Cannot find device "nvmf_init_if" 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:08:44.106 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:44.365 Cannot find device "nvmf_init_if2" 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.365 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.365 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:44.625 06:01:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:44.625 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.625 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.082 ms 00:08:44.625 00:08:44.625 --- 10.0.0.3 ping statistics --- 00:08:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.625 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:44.625 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:44.625 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.078 ms 00:08:44.625 00:08:44.625 --- 10.0.0.4 ping statistics --- 00:08:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.625 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:44.625 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:44.625 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:08:44.625 00:08:44.625 --- 10.0.0.1 ping statistics --- 00:08:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.625 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:44.625 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:44.625 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:08:44.625 00:08:44.625 --- 10.0.0.2 ping statistics --- 00:08:44.625 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.625 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@457 -- # return 0 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@505 -- # nvmfpid=74348 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@506 -- # waitforlisten 74348 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74348 ']' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:44.625 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:44.625 [2024-10-01 06:01:10.190953] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:08:44.625 [2024-10-01 06:01:10.191229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:44.884 [2024-10-01 06:01:10.328662] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:44.884 [2024-10-01 06:01:10.375055] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:44.884 [2024-10-01 06:01:10.375335] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:44.884 [2024-10-01 06:01:10.375784] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:44.884 [2024-10-01 06:01:10.376078] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:44.884 [2024-10-01 06:01:10.376352] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:44.884 [2024-10-01 06:01:10.376786] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:44.884 [2024-10-01 06:01:10.376868] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:44.884 [2024-10-01 06:01:10.376999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:08:44.884 [2024-10-01 06:01:10.377003] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:44.884 [2024-10-01 06:01:10.411553] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:44.884 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:44.884 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:44.884 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:44.884 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:44.884 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.142 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.143 [2024-10-01 06:01:10.514500] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
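Note: the nvmf_veth_init sequence traced a little earlier (the "Cannot find device" cleanup messages followed by the ip/iptables commands) builds the self-contained topology this target runs in: a dedicated network namespace, veth pairs whose target ends are moved into it, one bridge joining the root-namespace ends, ACCEPT rules for the NVMe/TCP port, and ping checks. Condensed to the single initiator/target pair this run actually exercises (the second pair and the cleanup steps are omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is pushed into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                      # both root-namespace veth ends join the bridge
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3                                            # root namespace reaches the target address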
00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.143 Malloc0 00:08:45.143 [2024-10-01 06:01:10.576497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=74389 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 74389 /var/tmp/bdevperf.sock 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 74389 ']' 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
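Note: host_management.sh assembles its target configuration in rpcs.txt and replays it through rpc_cmd, so the individual RPCs are not echoed in this trace; only their effects are (the Malloc0 bdev and the NVMe/TCP listener on 10.0.0.3 port 4420). The transport itself was already created by the `nvmf_create_transport -t tcp -o -u 8192` call traced just above. A rough reconstruction of what that rpcs.txt amounts to, expressed as rpc.py calls; this is an inference from the observable effects, not a copy of the script:

    rpc.py bdev_malloc_create 64 512 -b Malloc0                            # MALLOC_BDEV_SIZE MiB, MALLOC_BLOCK_SIZE-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0    # host0 is removed again later in the test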
00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:45.143 { 00:08:45.143 "params": { 00:08:45.143 "name": "Nvme$subsystem", 00:08:45.143 "trtype": "$TEST_TRANSPORT", 00:08:45.143 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:45.143 "adrfam": "ipv4", 00:08:45.143 "trsvcid": "$NVMF_PORT", 00:08:45.143 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:45.143 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:45.143 "hdgst": ${hdgst:-false}, 00:08:45.143 "ddgst": ${ddgst:-false} 00:08:45.143 }, 00:08:45.143 "method": "bdev_nvme_attach_controller" 00:08:45.143 } 00:08:45.143 EOF 00:08:45.143 )") 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:45.143 06:01:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:45.143 "params": { 00:08:45.143 "name": "Nvme0", 00:08:45.143 "trtype": "tcp", 00:08:45.143 "traddr": "10.0.0.3", 00:08:45.143 "adrfam": "ipv4", 00:08:45.143 "trsvcid": "4420", 00:08:45.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:45.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:45.143 "hdgst": false, 00:08:45.143 "ddgst": false 00:08:45.143 }, 00:08:45.143 "method": "bdev_nvme_attach_controller" 00:08:45.143 }' 00:08:45.143 [2024-10-01 06:01:10.679462] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:08:45.143 [2024-10-01 06:01:10.679548] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74389 ] 00:08:45.401 [2024-10-01 06:01:10.844513] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.401 [2024-10-01 06:01:10.894546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.401 [2024-10-01 06:01:10.938049] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.661 Running I/O for 10 seconds... 
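Note: the JSON block printed just above by gen_nvmf_target_json is the entire bdev configuration bdevperf receives on /dev/fd/63: a single bdev_nvme_attach_controller call pointing at the listener created a moment earlier, which surfaces inside bdevperf as the bdev Nvme0n1. A sketch of the same flow plus the waitforio polling that the following trace lines perform (paths shortened; gen_nvmf_target_json is the harness helper seen in the trace, and the loop is a simplification of waitforio):

    bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 0) -q 64 -o 65536 -w verify -t 10 &
    perfpid=$!

    # waitforio: poll the bdevperf RPC socket until Nvme0n1 reports at least 100 completed reads.
    reads=0
    until (( reads >= 100 )); do
            sleep 0.25
            reads=$(rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    done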
00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.661 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:45.662 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.921 06:01:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.921 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.921 [2024-10-01 06:01:11.484480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.921 [2024-10-01 06:01:11.484708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.484983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 
06:01:11.485244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485479] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.922 [2024-10-01 06:01:11.485903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.922 [2024-10-01 06:01:11.485929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.486253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.486471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.486599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.486941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487430] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:45.923 [2024-10-01 06:01:11.487605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.487616] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x787370 is same with the state(6) to be set 00:08:45.923 [2024-10-01 06:01:11.487666] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x787370 was disconnected and freed. reset controller. 
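The waitforio helper traced at the top of this excerpt (host_management.sh @52-@64) polls bdevperf's RPC socket until the Nvme0n1 bdev has served at least 100 reads (67 on the first pass, 579 on the second, at which point it breaks with ret=0). A minimal stand-alone sketch of that polling pattern, assuming SPDK's scripts/rpc.py is on PATH; the function name is illustrative, while the socket path, jq filter, 10-attempt limit and 0.25 s sleep mirror the trace:

# Poll bdevperf over its RPC socket until the bdev has completed >= 100 reads.
wait_for_read_io() {
    local sock=/var/tmp/bdevperf.sock bdev=Nvme0n1
    local i reads
    for ((i = 10; i != 0; i--)); do
        # bdev_get_iostat reports per-bdev IO counters as JSON
        reads=$(rpc.py -s "$sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
        [ "$reads" -ge 100 ] && return 0
        sleep 0.25
    done
    return 1
}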
00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:45.923 [2024-10-01 06:01:11.488892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:45.923 task offset: 89984 on job bdev=Nvme0n1 fails 00:08:45.923 00:08:45.923 Latency(us) 00:08:45.923 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.923 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:45.923 Job: Nvme0n1 ended in about 0.45 seconds with error 00:08:45.923 Verification LBA range: start 0x0 length 0x400 00:08:45.923 Nvme0n1 : 0.45 1433.32 89.58 143.33 0.00 39005.07 7596.22 43611.23 00:08:45.923 =================================================================================================================== 00:08:45.923 Total : 1433.32 89.58 143.33 0.00 39005.07 7596.22 43611.23 00:08:45.923 [2024-10-01 06:01:11.491307] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:45.923 [2024-10-01 06:01:11.491432] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56f860 (9): Bad file descriptor 00:08:45.923 [2024-10-01 06:01:11.496995] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:45.923 [2024-10-01 06:01:11.497360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:45.923 [2024-10-01 06:01:11.497568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:45.923 [2024-10-01 06:01:11.497726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:45.923 [2024-10-01 06:01:11.497877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:08:45.923 [2024-10-01 06:01:11.498056] nvme_tcp.c:2459:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:45.923 [2024-10-01 06:01:11.498237] nvme_tcp.c:2236:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x56f860 00:08:45.923 [2024-10-01 06:01:11.498391] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56f860 (9): Bad file descriptor 00:08:45.923 [2024-10-01 06:01:11.498552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:45.923 [2024-10-01 06:01:11.498700] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:45.923 [2024-10-01 06:01:11.498833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:45.923 [2024-10-01 06:01:11.498969] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
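The abort storm above (READ commands completed with "ABORTED - SQ DELETION") and the subsequent "does not allow host" reconnect failures come from the host-management step that revokes the host's access to the subsystem while bdevperf is mid-verify and then restores it (host_management.sh @84-@85). A hedged reconstruction of that step with the stock RPCs, NQNs copied from the trace:

# Revoke access while IO is in flight: the host's queue pairs are torn down and
# in-flight commands are aborted; reconnect attempts fail until access is restored.
rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0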
00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:45.923 06:01:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 74389 00:08:47.299 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (74389) - No such process 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # config=() 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # local subsystem config 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:08:47.299 { 00:08:47.299 "params": { 00:08:47.299 "name": "Nvme$subsystem", 00:08:47.299 "trtype": "$TEST_TRANSPORT", 00:08:47.299 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:47.299 "adrfam": "ipv4", 00:08:47.299 "trsvcid": "$NVMF_PORT", 00:08:47.299 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:47.299 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:47.299 "hdgst": ${hdgst:-false}, 00:08:47.299 "ddgst": ${ddgst:-false} 00:08:47.299 }, 00:08:47.299 "method": "bdev_nvme_attach_controller" 00:08:47.299 } 00:08:47.299 EOF 00:08:47.299 )") 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@578 -- # cat 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@580 -- # jq . 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@581 -- # IFS=, 00:08:47.299 06:01:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:08:47.299 "params": { 00:08:47.299 "name": "Nvme0", 00:08:47.299 "trtype": "tcp", 00:08:47.299 "traddr": "10.0.0.3", 00:08:47.299 "adrfam": "ipv4", 00:08:47.299 "trsvcid": "4420", 00:08:47.299 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:47.299 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:47.299 "hdgst": false, 00:08:47.299 "ddgst": false 00:08:47.299 }, 00:08:47.299 "method": "bdev_nvme_attach_controller" 00:08:47.299 }' 00:08:47.299 [2024-10-01 06:01:12.572963] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
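The second bdevperf run above is driven entirely by a JSON config generated on the fly by gen_nvmf_target_json and handed over on /dev/fd/62. A rough equivalent is sketched below using a temporary file instead of the fd redirection; the bdev_nvme_attach_controller entry is copied from the printf output in the trace, while the surrounding "subsystems"/"bdev" wrapper and the /tmp path are assumptions made so the config is self-contained:

# Illustrative only: write an equivalent config and run the same workload
# as "--json /dev/fd/62 -q 64 -o 65536 -w verify -t 1" in the log.
cat > /tmp/nvme0_bdevperf.json <<'CONF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.3",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
CONF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/nvme0_bdevperf.json -q 64 -o 65536 -w verify -t 1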
00:08:47.299 [2024-10-01 06:01:12.573077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74429 ] 00:08:47.299 [2024-10-01 06:01:12.718288] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.299 [2024-10-01 06:01:12.761185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.299 [2024-10-01 06:01:12.803437] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.299 Running I/O for 1 seconds... 00:08:48.676 1408.00 IOPS, 88.00 MiB/s 00:08:48.676 Latency(us) 00:08:48.676 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:48.676 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:48.676 Verification LBA range: start 0x0 length 0x400 00:08:48.676 Nvme0n1 : 1.01 1463.61 91.48 0.00 0.00 42745.09 3991.74 42419.67 00:08:48.676 =================================================================================================================== 00:08:48.676 Total : 1463.61 91.48 0.00 0.00 42745.09 3991.74 42419.67 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # nvmfcleanup 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:48.676 rmmod nvme_tcp 00:08:48.676 rmmod nvme_fabrics 00:08:48.676 rmmod nvme_keyring 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@513 -- # '[' -n 74348 ']' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@514 -- # killprocess 74348 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 74348 ']' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 74348 00:08:48.676 
06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74348 00:08:48.676 killing process with pid 74348 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74348' 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 74348 00:08:48.676 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 74348 00:08:48.935 [2024-10-01 06:01:14.339068] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-save 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@787 -- # iptables-restore 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:08:48.935 06:01:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:48.935 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.193 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:08:49.193 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.193 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.193 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.193 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:49.194 00:08:49.194 real 0m5.234s 00:08:49.194 user 0m18.150s 00:08:49.194 sys 0m1.410s 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:49.194 ************************************ 00:08:49.194 END TEST nvmf_host_management 00:08:49.194 ************************************ 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:49.194 ************************************ 00:08:49.194 START TEST nvmf_lvol 00:08:49.194 ************************************ 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:49.194 * Looking for test storage... 
00:08:49.194 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lcov --version 00:08:49.194 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:49.453 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:49.453 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:49.453 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:49.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.454 --rc genhtml_branch_coverage=1 00:08:49.454 --rc genhtml_function_coverage=1 00:08:49.454 --rc genhtml_legend=1 00:08:49.454 --rc geninfo_all_blocks=1 00:08:49.454 --rc geninfo_unexecuted_blocks=1 00:08:49.454 00:08:49.454 ' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:49.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.454 --rc genhtml_branch_coverage=1 00:08:49.454 --rc genhtml_function_coverage=1 00:08:49.454 --rc genhtml_legend=1 00:08:49.454 --rc geninfo_all_blocks=1 00:08:49.454 --rc geninfo_unexecuted_blocks=1 00:08:49.454 00:08:49.454 ' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:49.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.454 --rc genhtml_branch_coverage=1 00:08:49.454 --rc genhtml_function_coverage=1 00:08:49.454 --rc genhtml_legend=1 00:08:49.454 --rc geninfo_all_blocks=1 00:08:49.454 --rc geninfo_unexecuted_blocks=1 00:08:49.454 00:08:49.454 ' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:49.454 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.454 --rc genhtml_branch_coverage=1 00:08:49.454 --rc genhtml_function_coverage=1 00:08:49.454 --rc genhtml_legend=1 00:08:49.454 --rc geninfo_all_blocks=1 00:08:49.454 --rc geninfo_unexecuted_blocks=1 00:08:49.454 00:08:49.454 ' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.454 06:01:14 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:49.454 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:49.454 
06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.454 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@472 -- # prepare_net_devs 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@434 -- # local -g is_hw=no 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@436 -- # remove_spdk_ns 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@456 -- # nvmf_veth_init 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
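nvmftestinit has now reached nvmf_veth_init: the trace that follows first probes for stale interfaces (the "Cannot find device ..." lines) and then builds a fresh topology in which the target end of each veth pair lives in the nvmf_tgt_ns_spdk namespace and both halves are joined by the nvmf_br bridge. A condensed sketch of that setup, showing only one initiator pair and one target pair for brevity; interface names and addresses follow the log:

# Build the initiator/target veth topology used by the TCP transport tests.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge                             # bridge the two halves together
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up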
00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:08:49.455 Cannot find device "nvmf_init_br" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:08:49.455 Cannot find device "nvmf_init_br2" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:08:49.455 Cannot find device "nvmf_tgt_br" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:08:49.455 Cannot find device "nvmf_tgt_br2" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:08:49.455 Cannot find device "nvmf_init_br" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:08:49.455 Cannot find device "nvmf_init_br2" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:08:49.455 Cannot find device "nvmf_tgt_br" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:08:49.455 Cannot find device "nvmf_tgt_br2" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:08:49.455 Cannot find device "nvmf_br" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:08:49.455 Cannot find device "nvmf_init_if" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:08:49.455 Cannot find device "nvmf_init_if2" 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:49.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:49.455 06:01:14 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:49.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:49.455 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:08:49.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:49.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:08:49.714 00:08:49.714 --- 10.0.0.3 ping statistics --- 00:08:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.714 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:08:49.714 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:08:49.714 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.066 ms 00:08:49.714 00:08:49.714 --- 10.0.0.4 ping statistics --- 00:08:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.714 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:49.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:08:49.714 00:08:49.714 --- 10.0.0.1 ping statistics --- 00:08:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.714 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:08:49.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:49.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.042 ms 00:08:49.714 00:08:49.714 --- 10.0.0.2 ping statistics --- 00:08:49.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.714 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@457 -- # return 0 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.714 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@505 -- # nvmfpid=74698 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@506 -- # waitforlisten 74698 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 74698 ']' 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.715 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:49.715 [2024-10-01 06:01:15.306730] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
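The nvmf_veth_init trace above builds a small two-namespace topology before the target is started: two initiator veth pairs and two target veth pairs, the target ends moved into the nvmf_tgt_ns_spdk namespace, all host-side peers enslaved to an nvmf_br bridge, and iptables opened for TCP port 4420. A minimal standalone sketch of that topology, using the interface names and 10.0.0.0/24 addresses shown in the log (flag order, the SPDK_NVMF comment tagging, and error handling in the real nvmf/common.sh differ):

    # create the target namespace and the four veth pairs
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done
    # open the NVMe/TCP port and allow bridged forwarding
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity check, as in the trace: both sides answer pings
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1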
00:08:49.715 [2024-10-01 06:01:15.307332] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.973 [2024-10-01 06:01:15.447391] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:49.973 [2024-10-01 06:01:15.487873] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.973 [2024-10-01 06:01:15.487946] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:49.973 [2024-10-01 06:01:15.487960] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.973 [2024-10-01 06:01:15.487970] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.973 [2024-10-01 06:01:15.487990] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.973 [2024-10-01 06:01:15.488704] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.973 [2024-10-01 06:01:15.488882] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.973 [2024-10-01 06:01:15.488891] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.973 [2024-10-01 06:01:15.522838] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.973 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:49.973 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:49.973 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:08:49.973 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:49.973 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:50.261 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.262 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:50.534 [2024-10-01 06:01:15.914244] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.534 06:01:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:50.793 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:50.793 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:51.051 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:51.051 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:51.309 06:01:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:51.568 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=6d93cb42-a36d-42e1-81a0-cdf85d3eb69d 00:08:51.568 06:01:17 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 6d93cb42-a36d-42e1-81a0-cdf85d3eb69d lvol 20 00:08:51.826 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=5994867c-68f7-423f-8dd9-a7620380b8d8 00:08:51.826 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:52.393 06:01:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5994867c-68f7-423f-8dd9-a7620380b8d8 00:08:52.652 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:08:52.910 [2024-10-01 06:01:18.291254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:08:52.910 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:08:53.168 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=74766 00:08:53.168 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:53.168 06:01:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:54.101 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 5994867c-68f7-423f-8dd9-a7620380b8d8 MY_SNAPSHOT 00:08:54.359 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=7179f31f-a13b-4801-8597-38decb78ed5b 00:08:54.359 06:01:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 5994867c-68f7-423f-8dd9-a7620380b8d8 30 00:08:54.924 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 7179f31f-a13b-4801-8597-38decb78ed5b MY_CLONE 00:08:55.181 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0aeaa02a-629f-4768-b51a-e3df436d795e 00:08:55.181 06:01:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 0aeaa02a-629f-4768-b51a-e3df436d795e 00:08:55.747 06:01:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 74766 00:09:03.864 Initializing NVMe Controllers 00:09:03.864 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:09:03.864 Controller IO queue size 128, less than required. 00:09:03.864 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:03.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:03.864 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:03.864 Initialization complete. Launching workers. 
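Before the perf numbers land, the nvmf_lvol trace above has already walked the whole RPC flow: two 64 MB malloc bdevs striped into a raid0, an lvstore on top, a 20 MiB lvol exported over NVMe/TCP, and then, while spdk_nvme_perf drives I/O, a snapshot, a resize to 30 MiB, a clone of the snapshot, and an inflate of the clone. Condensed to the bare rpc.py calls from the trace (the shell variables and command substitutions around them are illustrative only; real runs return fresh UUIDs):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512                        # -> Malloc0
    $rpc bdev_malloc_create 64 512                        # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)        # lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)       # 20 MiB lvol
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    # while spdk_nvme_perf runs against the subsystem:
    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
    $rpc bdev_lvol_inflate "$clone"                       # decouple the clone from its snapshot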
00:09:03.864 ======================================================== 00:09:03.864 Latency(us) 00:09:03.864 Device Information : IOPS MiB/s Average min max 00:09:03.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10409.85 40.66 12304.13 1590.80 67694.55 00:09:03.864 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10341.55 40.40 12377.48 3311.57 51009.85 00:09:03.864 ======================================================== 00:09:03.864 Total : 20751.40 81.06 12340.68 1590.80 67694.55 00:09:03.864 00:09:03.864 06:01:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:03.864 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 5994867c-68f7-423f-8dd9-a7620380b8d8 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6d93cb42-a36d-42e1-81a0-cdf85d3eb69d 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:04.122 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.381 rmmod nvme_tcp 00:09:04.381 rmmod nvme_fabrics 00:09:04.381 rmmod nvme_keyring 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@513 -- # '[' -n 74698 ']' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@514 -- # killprocess 74698 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 74698 ']' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 74698 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74698 00:09:04.381 killing process with pid 74698 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 74698' 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 74698 00:09:04.381 06:01:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 74698 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-save 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@787 -- # iptables-restore 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:09:04.639 00:09:04.639 real 0m15.596s 00:09:04.639 user 1m4.800s 00:09:04.639 sys 0m4.218s 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:04.639 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:04.639 ************************************ 00:09:04.639 END TEST nvmf_lvol 00:09:04.639 ************************************ 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:04.899 ************************************ 00:09:04.899 START TEST nvmf_lvs_grow 00:09:04.899 ************************************ 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:04.899 * Looking for test storage... 00:09:04.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lcov --version 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:04.899 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:04.899 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.899 --rc genhtml_branch_coverage=1 00:09:04.899 --rc genhtml_function_coverage=1 00:09:04.899 --rc genhtml_legend=1 00:09:04.900 --rc geninfo_all_blocks=1 00:09:04.900 --rc geninfo_unexecuted_blocks=1 00:09:04.900 00:09:04.900 ' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:04.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.900 --rc genhtml_branch_coverage=1 00:09:04.900 --rc genhtml_function_coverage=1 00:09:04.900 --rc genhtml_legend=1 00:09:04.900 --rc geninfo_all_blocks=1 00:09:04.900 --rc geninfo_unexecuted_blocks=1 00:09:04.900 00:09:04.900 ' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:04.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.900 --rc genhtml_branch_coverage=1 00:09:04.900 --rc genhtml_function_coverage=1 00:09:04.900 --rc genhtml_legend=1 00:09:04.900 --rc geninfo_all_blocks=1 00:09:04.900 --rc geninfo_unexecuted_blocks=1 00:09:04.900 00:09:04.900 ' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:04.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:04.900 --rc genhtml_branch_coverage=1 00:09:04.900 --rc genhtml_function_coverage=1 00:09:04.900 --rc genhtml_legend=1 00:09:04.900 --rc geninfo_all_blocks=1 00:09:04.900 --rc geninfo_unexecuted_blocks=1 00:09:04.900 00:09:04.900 ' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:04.900 06:01:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:04.900 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
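One detail worth noting from both test traces: firewall rules are never tracked individually. The ipts wrapper tags every rule it inserts with an SPDK_NVMF comment, and the iptr cleanup (visible at the end of the nvmf_lvol run above) simply replays iptables-save with those tagged rules filtered out. A rough standalone equivalent of the two helpers, assuming the same comment tag (the real functions in nvmf/common.sh may differ slightly):

    ipts() {
        # insert the rule, tagging it so cleanup can find it later
        iptables "$@" -m comment --comment "SPDK_NVMF:$*"
    }
    iptr() {
        # drop every rule carrying the SPDK_NVMF tag in one pass
        iptables-save | grep -v SPDK_NVMF | iptables-restore
    }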
00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:04.900 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:05.159 Cannot find device "nvmf_init_br" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:05.159 Cannot find device "nvmf_init_br2" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:05.159 Cannot find device "nvmf_tgt_br" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.159 Cannot find device "nvmf_tgt_br2" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:05.159 Cannot find device "nvmf_init_br" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:05.159 Cannot find device "nvmf_init_br2" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:05.159 Cannot find device "nvmf_tgt_br" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:05.159 Cannot find device "nvmf_tgt_br2" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:05.159 Cannot find device "nvmf_br" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:05.159 Cannot find device "nvmf_init_if" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:05.159 Cannot find device "nvmf_init_if2" 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.159 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:05.159 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:05.418 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:05.419 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.419 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:05.419 00:09:05.419 --- 10.0.0.3 ping statistics --- 00:09:05.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.419 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:05.419 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:05.419 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:09:05.419 00:09:05.419 --- 10.0.0.4 ping statistics --- 00:09:05.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.419 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:05.419 00:09:05.419 --- 10.0.0.1 ping statistics --- 00:09:05.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.419 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:05.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:05.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:09:05.419 00:09:05.419 --- 10.0.0.2 ping statistics --- 00:09:05.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.419 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@457 -- # return 0 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@505 -- # nvmfpid=75143 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@506 -- # waitforlisten 75143 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 75143 ']' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:05.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:05.419 06:01:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.419 [2024-10-01 06:01:30.976741] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
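nvmfappstart here follows the same pattern as in the lvol test: the target binary is launched inside the namespace, so its listeners bind the 10.0.0.3/10.0.0.4 side of the veth pairs, and the harness blocks until the RPC socket answers. A trimmed-down sketch of that launch for this run (the real waitforlisten helper in autotest_common.sh retries with a timeout and extra checks; the loop below only shows the shape):

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!
    # block until the target's RPC socket accepts commands
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done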
00:09:05.419 [2024-10-01 06:01:30.976851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.678 [2024-10-01 06:01:31.115748] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:05.678 [2024-10-01 06:01:31.146748] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.678 [2024-10-01 06:01:31.146812] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.678 [2024-10-01 06:01:31.146838] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.678 [2024-10-01 06:01:31.146845] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.678 [2024-10-01 06:01:31.146851] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:05.678 [2024-10-01 06:01:31.146873] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.678 [2024-10-01 06:01:31.173436] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.678 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:06.245 [2024-10-01 06:01:31.580394] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:06.245 ************************************ 00:09:06.245 START TEST lvs_grow_clean 00:09:06.245 ************************************ 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:06.245 06:01:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:06.245 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:06.503 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:06.503 06:01:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:06.761 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=8565a062-790d-4899-8f15-9d00b64b0887 00:09:06.761 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:06.761 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:07.019 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:07.019 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:07.019 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 8565a062-790d-4899-8f15-9d00b64b0887 lvol 150 00:09:07.278 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=20e47be6-f4a6-4240-a848-569945f81b70 00:09:07.278 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.278 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:07.536 [2024-10-01 06:01:32.969556] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:07.536 [2024-10-01 06:01:32.969646] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:07.536 true 00:09:07.536 06:01:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:07.536 06:01:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:07.794 06:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:07.794 06:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.053 06:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 20e47be6-f4a6-4240-a848-569945f81b70 00:09:08.312 06:01:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:08.571 [2024-10-01 06:01:34.034143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:08.571 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75219 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75219 /var/tmp/bdevperf.sock 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 75219 ']' 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:08.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:08.830 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:08.830 [2024-10-01 06:01:34.346117] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:08.830 [2024-10-01 06:01:34.346223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75219 ] 00:09:09.088 [2024-10-01 06:01:34.480083] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.088 [2024-10-01 06:01:34.514141] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.088 [2024-10-01 06:01:34.542021] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:09.088 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:09.088 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:09.088 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:09.346 Nvme0n1 00:09:09.346 06:01:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:09.605 [ 00:09:09.605 { 00:09:09.605 "name": "Nvme0n1", 00:09:09.605 "aliases": [ 00:09:09.605 "20e47be6-f4a6-4240-a848-569945f81b70" 00:09:09.605 ], 00:09:09.605 "product_name": "NVMe disk", 00:09:09.605 "block_size": 4096, 00:09:09.605 "num_blocks": 38912, 00:09:09.605 "uuid": "20e47be6-f4a6-4240-a848-569945f81b70", 00:09:09.605 "numa_id": -1, 00:09:09.605 "assigned_rate_limits": { 00:09:09.605 "rw_ios_per_sec": 0, 00:09:09.605 "rw_mbytes_per_sec": 0, 00:09:09.605 "r_mbytes_per_sec": 0, 00:09:09.605 "w_mbytes_per_sec": 0 00:09:09.605 }, 00:09:09.605 "claimed": false, 00:09:09.605 "zoned": false, 00:09:09.605 "supported_io_types": { 00:09:09.605 "read": true, 00:09:09.605 "write": true, 00:09:09.605 "unmap": true, 00:09:09.605 "flush": true, 00:09:09.605 "reset": true, 00:09:09.605 "nvme_admin": true, 00:09:09.605 "nvme_io": true, 00:09:09.605 "nvme_io_md": false, 00:09:09.605 "write_zeroes": true, 00:09:09.605 "zcopy": false, 00:09:09.605 "get_zone_info": false, 00:09:09.605 "zone_management": false, 00:09:09.605 "zone_append": false, 00:09:09.605 "compare": true, 00:09:09.605 "compare_and_write": true, 00:09:09.605 "abort": true, 00:09:09.605 "seek_hole": false, 00:09:09.605 "seek_data": false, 00:09:09.605 "copy": true, 00:09:09.605 "nvme_iov_md": false 00:09:09.605 }, 00:09:09.605 "memory_domains": [ 00:09:09.605 { 00:09:09.605 "dma_device_id": "system", 00:09:09.605 "dma_device_type": 1 00:09:09.605 } 00:09:09.605 ], 00:09:09.605 "driver_specific": { 00:09:09.605 "nvme": [ 00:09:09.605 { 00:09:09.605 "trid": { 00:09:09.605 "trtype": "TCP", 00:09:09.605 "adrfam": "IPv4", 00:09:09.605 "traddr": "10.0.0.3", 00:09:09.605 "trsvcid": "4420", 00:09:09.605 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:09.605 }, 00:09:09.605 "ctrlr_data": { 00:09:09.605 "cntlid": 1, 00:09:09.605 "vendor_id": "0x8086", 00:09:09.605 "model_number": "SPDK bdev Controller", 00:09:09.605 "serial_number": "SPDK0", 00:09:09.605 "firmware_revision": "25.01", 00:09:09.605 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:09.605 "oacs": { 00:09:09.605 "security": 0, 00:09:09.605 "format": 0, 00:09:09.605 "firmware": 0, 
00:09:09.605 "ns_manage": 0 00:09:09.605 }, 00:09:09.605 "multi_ctrlr": true, 00:09:09.605 "ana_reporting": false 00:09:09.605 }, 00:09:09.605 "vs": { 00:09:09.605 "nvme_version": "1.3" 00:09:09.605 }, 00:09:09.605 "ns_data": { 00:09:09.605 "id": 1, 00:09:09.605 "can_share": true 00:09:09.605 } 00:09:09.605 } 00:09:09.605 ], 00:09:09.605 "mp_policy": "active_passive" 00:09:09.605 } 00:09:09.605 } 00:09:09.605 ] 00:09:09.605 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75230 00:09:09.605 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:09.605 06:01:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:09.863 Running I/O for 10 seconds... 00:09:10.798 Latency(us) 00:09:10.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:10.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.798 Nvme0n1 : 1.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:10.798 =================================================================================================================== 00:09:10.798 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:10.798 00:09:11.732 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:11.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.732 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:11.732 =================================================================================================================== 00:09:11.732 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:11.732 00:09:11.991 true 00:09:11.991 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:11.991 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:12.250 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:12.250 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:12.250 06:01:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 75230 00:09:12.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.816 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:12.816 =================================================================================================================== 00:09:12.816 Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:12.816 00:09:13.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.752 Nvme0n1 : 4.00 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:13.752 =================================================================================================================== 00:09:13.752 Total : 6699.25 26.17 0.00 0.00 0.00 0.00 0.00 00:09:13.752 00:09:14.688 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:14.688 Nvme0n1 : 5.00 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:14.688 =================================================================================================================== 00:09:14.688 Total : 6654.80 26.00 0.00 0.00 0.00 0.00 0.00 00:09:14.688 00:09:15.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.660 Nvme0n1 : 6.00 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:15.660 =================================================================================================================== 00:09:15.660 Total : 6625.17 25.88 0.00 0.00 0.00 0.00 0.00 00:09:15.660 00:09:17.035 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.035 Nvme0n1 : 7.00 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:09:17.035 =================================================================================================================== 00:09:17.035 Total : 6567.71 25.66 0.00 0.00 0.00 0.00 0.00 00:09:17.035 00:09:17.970 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.970 Nvme0n1 : 8.00 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:09:17.970 =================================================================================================================== 00:09:17.970 Total : 6524.62 25.49 0.00 0.00 0.00 0.00 0.00 00:09:17.970 00:09:18.904 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.904 Nvme0n1 : 9.00 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:18.904 =================================================================================================================== 00:09:18.904 Total : 6505.22 25.41 0.00 0.00 0.00 0.00 0.00 00:09:18.904 00:09:19.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.838 Nvme0n1 : 10.00 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:19.838 =================================================================================================================== 00:09:19.838 Total : 6502.40 25.40 0.00 0.00 0.00 0.00 0.00 00:09:19.838 00:09:19.838 00:09:19.838 Latency(us) 00:09:19.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.838 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.838 Nvme0n1 : 10.02 6504.41 25.41 0.00 0.00 19674.07 17277.67 42181.35 00:09:19.838 =================================================================================================================== 00:09:19.838 Total : 6504.41 25.41 0.00 0.00 19674.07 17277.67 42181.35 00:09:19.838 { 00:09:19.838 "results": [ 00:09:19.838 { 00:09:19.838 "job": "Nvme0n1", 00:09:19.838 "core_mask": "0x2", 00:09:19.838 "workload": "randwrite", 00:09:19.838 "status": "finished", 00:09:19.838 "queue_depth": 128, 00:09:19.838 "io_size": 4096, 00:09:19.838 "runtime": 10.016596, 00:09:19.838 "iops": 6504.405288982405, 00:09:19.838 "mibps": 25.40783316008752, 00:09:19.838 "io_failed": 0, 00:09:19.838 "io_timeout": 0, 00:09:19.838 "avg_latency_us": 19674.07231291302, 00:09:19.838 "min_latency_us": 17277.672727272726, 00:09:19.838 "max_latency_us": 42181.35272727273 00:09:19.838 } 00:09:19.838 ], 00:09:19.838 "core_count": 1 00:09:19.838 } 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75219 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 75219 ']' 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@954 -- # kill -0 75219 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75219 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:19.838 killing process with pid 75219 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75219' 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 75219 00:09:19.838 Received shutdown signal, test time was about 10.000000 seconds 00:09:19.838 00:09:19.838 Latency(us) 00:09:19.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.838 =================================================================================================================== 00:09:19.838 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.838 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 75219 00:09:20.097 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:20.356 06:01:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:20.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:20.614 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:20.872 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:20.872 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:20.872 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:20.872 [2024-10-01 06:01:46.467447] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:21.130 06:01:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.130 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:21.131 request: 00:09:21.131 { 00:09:21.131 "uuid": "8565a062-790d-4899-8f15-9d00b64b0887", 00:09:21.131 "method": "bdev_lvol_get_lvstores", 00:09:21.131 "req_id": 1 00:09:21.131 } 00:09:21.131 Got JSON-RPC error response 00:09:21.131 response: 00:09:21.131 { 00:09:21.131 "code": -19, 00:09:21.131 "message": "No such device" 00:09:21.131 } 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:21.131 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:21.390 aio_bdev 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 20e47be6-f4a6-4240-a848-569945f81b70 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=20e47be6-f4a6-4240-a848-569945f81b70 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.390 06:01:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:21.649 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 20e47be6-f4a6-4240-a848-569945f81b70 -t 2000 00:09:21.907 [ 00:09:21.907 { 00:09:21.907 "name": "20e47be6-f4a6-4240-a848-569945f81b70", 00:09:21.907 "aliases": [ 00:09:21.907 "lvs/lvol" 00:09:21.907 ], 00:09:21.907 "product_name": "Logical Volume", 00:09:21.907 "block_size": 4096, 00:09:21.907 "num_blocks": 38912, 00:09:21.907 "uuid": "20e47be6-f4a6-4240-a848-569945f81b70", 00:09:21.907 "assigned_rate_limits": { 00:09:21.907 "rw_ios_per_sec": 0, 00:09:21.907 "rw_mbytes_per_sec": 0, 00:09:21.907 "r_mbytes_per_sec": 0, 00:09:21.907 "w_mbytes_per_sec": 0 00:09:21.907 }, 00:09:21.907 "claimed": false, 00:09:21.907 "zoned": false, 00:09:21.907 "supported_io_types": { 00:09:21.907 "read": true, 00:09:21.907 "write": true, 00:09:21.907 "unmap": true, 00:09:21.907 "flush": false, 00:09:21.907 "reset": true, 00:09:21.907 "nvme_admin": false, 00:09:21.907 "nvme_io": false, 00:09:21.908 "nvme_io_md": false, 00:09:21.908 "write_zeroes": true, 00:09:21.908 "zcopy": false, 00:09:21.908 "get_zone_info": false, 00:09:21.908 "zone_management": false, 00:09:21.908 "zone_append": false, 00:09:21.908 "compare": false, 00:09:21.908 "compare_and_write": false, 00:09:21.908 "abort": false, 00:09:21.908 "seek_hole": true, 00:09:21.908 "seek_data": true, 00:09:21.908 "copy": false, 00:09:21.908 "nvme_iov_md": false 00:09:21.908 }, 00:09:21.908 "driver_specific": { 00:09:21.908 "lvol": { 00:09:21.908 "lvol_store_uuid": "8565a062-790d-4899-8f15-9d00b64b0887", 00:09:21.908 "base_bdev": "aio_bdev", 00:09:21.908 "thin_provision": false, 00:09:21.908 "num_allocated_clusters": 38, 00:09:21.908 "snapshot": false, 00:09:21.908 "clone": false, 00:09:21.908 "esnap_clone": false 00:09:21.908 } 00:09:21.908 } 00:09:21.908 } 00:09:21.908 ] 00:09:21.908 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:21.908 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:21.908 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:22.166 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:22.166 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:22.166 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:22.425 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:22.425 06:01:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 20e47be6-f4a6-4240-a848-569945f81b70 00:09:22.684 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8565a062-790d-4899-8f15-9d00b64b0887 00:09:22.942 06:01:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:23.201 06:01:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:23.767 ************************************ 00:09:23.767 END TEST lvs_grow_clean 00:09:23.767 ************************************ 00:09:23.767 00:09:23.767 real 0m17.539s 00:09:23.767 user 0m16.399s 00:09:23.767 sys 0m2.358s 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:23.767 ************************************ 00:09:23.767 START TEST lvs_grow_dirty 00:09:23.767 ************************************ 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:23.767 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:23.768 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.026 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:24.026 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:24.285 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:24.285 
06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:24.285 06:01:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:24.544 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:24.544 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:24.544 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 lvol 150 00:09:24.803 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bef166f-63cd-4e37-a831-bd3061717fca 00:09:24.803 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.803 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:25.064 [2024-10-01 06:01:50.568894] bdev_aio.c:1044:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:25.064 [2024-10-01 06:01:50.569012] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:25.064 true 00:09:25.064 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:25.064 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:25.328 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:25.328 06:01:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.586 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bef166f-63cd-4e37-a831-bd3061717fca 00:09:25.845 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:09:26.103 [2024-10-01 06:01:51.505476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:26.103 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:26.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
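At this point the dirty variant has its lvolstore (60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6), a 150M lvol, and the NVMe/TCP subsystem in place; the I/O phase comes next. A minimal sketch of that phase, built only from the commands visible in this log ($SPDK is shorthand for /home/vagrant/spdk_repo/spdk, and the test script roughly backgrounds bdevperf before driving it over RPC):
# start bdevperf on its own RPC socket: core mask 0x2, 4 KiB I/O, queue depth 128, randwrite, 10 seconds, wait for RPC (-z)
$SPDK/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
# attach the lvol exported at 10.0.0.3:4420 as bdev Nvme0n1 inside the bdevperf app
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
# kick off the 10-second run
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests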
00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=75478 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 75478 /var/tmp/bdevperf.sock 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75478 ']' 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:26.362 06:01:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:26.362 [2024-10-01 06:01:51.879889] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:26.362 [2024-10-01 06:01:51.880185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75478 ] 00:09:26.621 [2024-10-01 06:01:52.007954] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.621 [2024-10-01 06:01:52.045829] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.621 [2024-10-01 06:01:52.074949] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.621 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:26.621 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:26.621 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:26.879 Nvme0n1 00:09:26.879 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:27.137 [ 00:09:27.137 { 00:09:27.137 "name": "Nvme0n1", 00:09:27.137 "aliases": [ 00:09:27.137 "9bef166f-63cd-4e37-a831-bd3061717fca" 00:09:27.137 ], 00:09:27.137 "product_name": "NVMe disk", 00:09:27.137 "block_size": 4096, 00:09:27.137 "num_blocks": 38912, 00:09:27.137 "uuid": "9bef166f-63cd-4e37-a831-bd3061717fca", 00:09:27.137 "numa_id": -1, 00:09:27.137 "assigned_rate_limits": { 00:09:27.137 
"rw_ios_per_sec": 0, 00:09:27.137 "rw_mbytes_per_sec": 0, 00:09:27.138 "r_mbytes_per_sec": 0, 00:09:27.138 "w_mbytes_per_sec": 0 00:09:27.138 }, 00:09:27.138 "claimed": false, 00:09:27.138 "zoned": false, 00:09:27.138 "supported_io_types": { 00:09:27.138 "read": true, 00:09:27.138 "write": true, 00:09:27.138 "unmap": true, 00:09:27.138 "flush": true, 00:09:27.138 "reset": true, 00:09:27.138 "nvme_admin": true, 00:09:27.138 "nvme_io": true, 00:09:27.138 "nvme_io_md": false, 00:09:27.138 "write_zeroes": true, 00:09:27.138 "zcopy": false, 00:09:27.138 "get_zone_info": false, 00:09:27.138 "zone_management": false, 00:09:27.138 "zone_append": false, 00:09:27.138 "compare": true, 00:09:27.138 "compare_and_write": true, 00:09:27.138 "abort": true, 00:09:27.138 "seek_hole": false, 00:09:27.138 "seek_data": false, 00:09:27.138 "copy": true, 00:09:27.138 "nvme_iov_md": false 00:09:27.138 }, 00:09:27.138 "memory_domains": [ 00:09:27.138 { 00:09:27.138 "dma_device_id": "system", 00:09:27.138 "dma_device_type": 1 00:09:27.138 } 00:09:27.138 ], 00:09:27.138 "driver_specific": { 00:09:27.138 "nvme": [ 00:09:27.138 { 00:09:27.138 "trid": { 00:09:27.138 "trtype": "TCP", 00:09:27.138 "adrfam": "IPv4", 00:09:27.138 "traddr": "10.0.0.3", 00:09:27.138 "trsvcid": "4420", 00:09:27.138 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:27.138 }, 00:09:27.138 "ctrlr_data": { 00:09:27.138 "cntlid": 1, 00:09:27.138 "vendor_id": "0x8086", 00:09:27.138 "model_number": "SPDK bdev Controller", 00:09:27.138 "serial_number": "SPDK0", 00:09:27.138 "firmware_revision": "25.01", 00:09:27.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:27.138 "oacs": { 00:09:27.138 "security": 0, 00:09:27.138 "format": 0, 00:09:27.138 "firmware": 0, 00:09:27.138 "ns_manage": 0 00:09:27.138 }, 00:09:27.138 "multi_ctrlr": true, 00:09:27.138 "ana_reporting": false 00:09:27.138 }, 00:09:27.138 "vs": { 00:09:27.138 "nvme_version": "1.3" 00:09:27.138 }, 00:09:27.138 "ns_data": { 00:09:27.138 "id": 1, 00:09:27.138 "can_share": true 00:09:27.138 } 00:09:27.138 } 00:09:27.138 ], 00:09:27.138 "mp_policy": "active_passive" 00:09:27.138 } 00:09:27.138 } 00:09:27.138 ] 00:09:27.138 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=75493 00:09:27.138 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:27.138 06:01:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:27.397 Running I/O for 10 seconds... 
00:09:28.332 Latency(us) 00:09:28.332 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:28.332 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.332 Nvme0n1 : 1.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:28.332 =================================================================================================================== 00:09:28.332 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:28.332 00:09:29.269 06:01:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:29.269 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.269 Nvme0n1 : 2.00 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:29.269 =================================================================================================================== 00:09:29.269 Total : 6731.00 26.29 0.00 0.00 0.00 0.00 0.00 00:09:29.269 00:09:29.836 true 00:09:29.836 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:29.836 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:30.094 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:30.094 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:30.094 06:01:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 75493 00:09:30.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.353 Nvme0n1 : 3.00 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:30.353 =================================================================================================================== 00:09:30.353 Total : 6688.67 26.13 0.00 0.00 0.00 0.00 0.00 00:09:30.353 00:09:31.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.290 Nvme0n1 : 4.00 6384.75 24.94 0.00 0.00 0.00 0.00 0.00 00:09:31.290 =================================================================================================================== 00:09:31.290 Total : 6384.75 24.94 0.00 0.00 0.00 0.00 0.00 00:09:31.290 00:09:32.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.666 Nvme0n1 : 5.00 6377.80 24.91 0.00 0.00 0.00 0.00 0.00 00:09:32.666 =================================================================================================================== 00:09:32.666 Total : 6377.80 24.91 0.00 0.00 0.00 0.00 0.00 00:09:32.666 00:09:33.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.233 Nvme0n1 : 6.00 6352.00 24.81 0.00 0.00 0.00 0.00 0.00 00:09:33.233 =================================================================================================================== 00:09:33.233 Total : 6352.00 24.81 0.00 0.00 0.00 0.00 0.00 00:09:33.233 00:09:34.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.280 Nvme0n1 : 7.00 6388.00 24.95 0.00 0.00 0.00 0.00 0.00 00:09:34.280 =================================================================================================================== 00:09:34.280 
Total : 6388.00 24.95 0.00 0.00 0.00 0.00 0.00 00:09:34.280 00:09:35.235 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.235 Nvme0n1 : 8.00 6383.25 24.93 0.00 0.00 0.00 0.00 0.00 00:09:35.235 =================================================================================================================== 00:09:35.235 Total : 6383.25 24.93 0.00 0.00 0.00 0.00 0.00 00:09:35.235 00:09:36.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.610 Nvme0n1 : 9.00 6329.89 24.73 0.00 0.00 0.00 0.00 0.00 00:09:36.610 =================================================================================================================== 00:09:36.611 Total : 6329.89 24.73 0.00 0.00 0.00 0.00 0.00 00:09:36.611 00:09:37.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.546 Nvme0n1 : 10.00 6331.90 24.73 0.00 0.00 0.00 0.00 0.00 00:09:37.546 =================================================================================================================== 00:09:37.546 Total : 6331.90 24.73 0.00 0.00 0.00 0.00 0.00 00:09:37.546 00:09:37.546 00:09:37.546 Latency(us) 00:09:37.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.546 Nvme0n1 : 10.01 6337.39 24.76 0.00 0.00 20192.36 11915.64 183977.43 00:09:37.546 =================================================================================================================== 00:09:37.546 Total : 6337.39 24.76 0.00 0.00 20192.36 11915.64 183977.43 00:09:37.546 { 00:09:37.546 "results": [ 00:09:37.546 { 00:09:37.546 "job": "Nvme0n1", 00:09:37.546 "core_mask": "0x2", 00:09:37.546 "workload": "randwrite", 00:09:37.546 "status": "finished", 00:09:37.546 "queue_depth": 128, 00:09:37.546 "io_size": 4096, 00:09:37.546 "runtime": 10.011536, 00:09:37.546 "iops": 6337.389187832916, 00:09:37.546 "mibps": 24.755426514972328, 00:09:37.546 "io_failed": 0, 00:09:37.546 "io_timeout": 0, 00:09:37.546 "avg_latency_us": 20192.362985512602, 00:09:37.546 "min_latency_us": 11915.636363636364, 00:09:37.546 "max_latency_us": 183977.42545454545 00:09:37.546 } 00:09:37.546 ], 00:09:37.546 "core_count": 1 00:09:37.546 } 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 75478 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 75478 ']' 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 75478 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75478 00:09:37.546 killing process with pid 75478 00:09:37.546 Received shutdown signal, test time was about 10.000000 seconds 00:09:37.546 00:09:37.546 Latency(us) 00:09:37.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.546 =================================================================================================================== 00:09:37.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.546 06:02:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75478' 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 75478 00:09:37.546 06:02:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 75478 00:09:37.546 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:09:37.805 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:38.064 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:38.064 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:38.322 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:38.322 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:38.322 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 75143 00:09:38.322 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 75143 00:09:38.581 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 75143 Killed "${NVMF_APP[@]}" "$@" 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
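The point of the dirty variant shows up here: the nvmf target that owned the lvolstore was killed with SIGKILL (kill -9 75143), so the store was never cleanly unloaded. The log that follows restarts nvmf_tgt, re-creates the AIO bdev, and lets blobstore recovery replay the metadata; the lvol and its cluster counts are expected to survive. A minimal sketch of that check, using only commands present in this log (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py):
# unclean shutdown and recovery check
kill -9 75143                                        # nvmf_tgt dies while lvstore 60b9d08c-... is still open
# restart the target: ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096
# loading the aio bdev triggers "Performing recovery on blobstore" and replays blobs 0x0 and 0x1
rpc.py bdev_get_bdevs -b 9bef166f-63cd-4e37-a831-bd3061717fca -t 2000                                      # the lvol is back
rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 | jq -r '.[0].free_clusters'         # expect 61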
00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@505 -- # nvmfpid=75631 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@506 -- # waitforlisten 75631 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 75631 ']' 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.581 06:02:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.581 [2024-10-01 06:02:03.987493] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:38.581 [2024-10-01 06:02:03.987746] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.581 [2024-10-01 06:02:04.123902] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.581 [2024-10-01 06:02:04.158121] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.581 [2024-10-01 06:02:04.158542] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:38.581 [2024-10-01 06:02:04.158813] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.581 [2024-10-01 06:02:04.159155] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.581 [2024-10-01 06:02:04.159400] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:38.581 [2024-10-01 06:02:04.159692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.581 [2024-10-01 06:02:04.186527] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:38.839 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:38.839 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:38.839 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:38.840 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:38.840 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.840 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.840 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:39.098 [2024-10-01 06:02:04.508787] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:39.098 [2024-10-01 06:02:04.509335] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:39.098 [2024-10-01 06:02:04.509710] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9bef166f-63cd-4e37-a831-bd3061717fca 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9bef166f-63cd-4e37-a831-bd3061717fca 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:39.098 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:39.357 06:02:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9bef166f-63cd-4e37-a831-bd3061717fca -t 2000 00:09:39.616 [ 00:09:39.616 { 00:09:39.616 "name": "9bef166f-63cd-4e37-a831-bd3061717fca", 00:09:39.616 "aliases": [ 00:09:39.616 "lvs/lvol" 00:09:39.616 ], 00:09:39.616 "product_name": "Logical Volume", 00:09:39.616 "block_size": 4096, 00:09:39.616 "num_blocks": 38912, 00:09:39.616 "uuid": "9bef166f-63cd-4e37-a831-bd3061717fca", 00:09:39.616 "assigned_rate_limits": { 00:09:39.616 "rw_ios_per_sec": 0, 00:09:39.616 "rw_mbytes_per_sec": 0, 00:09:39.616 "r_mbytes_per_sec": 0, 00:09:39.616 "w_mbytes_per_sec": 0 00:09:39.616 }, 00:09:39.616 
"claimed": false, 00:09:39.616 "zoned": false, 00:09:39.616 "supported_io_types": { 00:09:39.616 "read": true, 00:09:39.616 "write": true, 00:09:39.616 "unmap": true, 00:09:39.616 "flush": false, 00:09:39.616 "reset": true, 00:09:39.616 "nvme_admin": false, 00:09:39.616 "nvme_io": false, 00:09:39.616 "nvme_io_md": false, 00:09:39.616 "write_zeroes": true, 00:09:39.616 "zcopy": false, 00:09:39.616 "get_zone_info": false, 00:09:39.616 "zone_management": false, 00:09:39.616 "zone_append": false, 00:09:39.616 "compare": false, 00:09:39.616 "compare_and_write": false, 00:09:39.616 "abort": false, 00:09:39.616 "seek_hole": true, 00:09:39.616 "seek_data": true, 00:09:39.616 "copy": false, 00:09:39.616 "nvme_iov_md": false 00:09:39.616 }, 00:09:39.616 "driver_specific": { 00:09:39.616 "lvol": { 00:09:39.616 "lvol_store_uuid": "60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6", 00:09:39.616 "base_bdev": "aio_bdev", 00:09:39.616 "thin_provision": false, 00:09:39.616 "num_allocated_clusters": 38, 00:09:39.616 "snapshot": false, 00:09:39.616 "clone": false, 00:09:39.616 "esnap_clone": false 00:09:39.616 } 00:09:39.616 } 00:09:39.616 } 00:09:39.616 ] 00:09:39.616 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:39.616 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:39.616 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:39.875 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:39.875 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:39.875 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:40.133 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:40.133 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:40.392 [2024-10-01 06:02:05.858886] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.392 06:02:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:40.392 06:02:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:40.650 request: 00:09:40.650 { 00:09:40.650 "uuid": "60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6", 00:09:40.650 "method": "bdev_lvol_get_lvstores", 00:09:40.650 "req_id": 1 00:09:40.650 } 00:09:40.650 Got JSON-RPC error response 00:09:40.650 response: 00:09:40.650 { 00:09:40.650 "code": -19, 00:09:40.650 "message": "No such device" 00:09:40.650 } 00:09:40.650 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:40.650 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:40.650 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:40.650 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:40.650 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.908 aio_bdev 00:09:40.908 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9bef166f-63cd-4e37-a831-bd3061717fca 00:09:40.908 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=9bef166f-63cd-4e37-a831-bd3061717fca 00:09:40.909 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:40.909 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:40.909 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:40.909 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:40.909 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.167 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9bef166f-63cd-4e37-a831-bd3061717fca -t 2000 00:09:41.426 [ 00:09:41.426 { 
00:09:41.426 "name": "9bef166f-63cd-4e37-a831-bd3061717fca", 00:09:41.426 "aliases": [ 00:09:41.426 "lvs/lvol" 00:09:41.426 ], 00:09:41.426 "product_name": "Logical Volume", 00:09:41.426 "block_size": 4096, 00:09:41.426 "num_blocks": 38912, 00:09:41.426 "uuid": "9bef166f-63cd-4e37-a831-bd3061717fca", 00:09:41.426 "assigned_rate_limits": { 00:09:41.426 "rw_ios_per_sec": 0, 00:09:41.426 "rw_mbytes_per_sec": 0, 00:09:41.426 "r_mbytes_per_sec": 0, 00:09:41.426 "w_mbytes_per_sec": 0 00:09:41.426 }, 00:09:41.426 "claimed": false, 00:09:41.426 "zoned": false, 00:09:41.426 "supported_io_types": { 00:09:41.426 "read": true, 00:09:41.426 "write": true, 00:09:41.426 "unmap": true, 00:09:41.426 "flush": false, 00:09:41.426 "reset": true, 00:09:41.426 "nvme_admin": false, 00:09:41.426 "nvme_io": false, 00:09:41.426 "nvme_io_md": false, 00:09:41.426 "write_zeroes": true, 00:09:41.426 "zcopy": false, 00:09:41.426 "get_zone_info": false, 00:09:41.426 "zone_management": false, 00:09:41.426 "zone_append": false, 00:09:41.426 "compare": false, 00:09:41.426 "compare_and_write": false, 00:09:41.426 "abort": false, 00:09:41.426 "seek_hole": true, 00:09:41.426 "seek_data": true, 00:09:41.426 "copy": false, 00:09:41.426 "nvme_iov_md": false 00:09:41.426 }, 00:09:41.426 "driver_specific": { 00:09:41.426 "lvol": { 00:09:41.426 "lvol_store_uuid": "60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6", 00:09:41.426 "base_bdev": "aio_bdev", 00:09:41.426 "thin_provision": false, 00:09:41.426 "num_allocated_clusters": 38, 00:09:41.426 "snapshot": false, 00:09:41.426 "clone": false, 00:09:41.426 "esnap_clone": false 00:09:41.426 } 00:09:41.426 } 00:09:41.426 } 00:09:41.426 ] 00:09:41.426 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:41.426 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:41.426 06:02:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:41.685 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:41.685 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:41.685 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:41.943 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:41.943 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 9bef166f-63cd-4e37-a831-bd3061717fca 00:09:42.202 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 60b9d08c-c8e6-4f1a-9af2-e1d787e7b0f6 00:09:42.461 06:02:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.719 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:42.977 ************************************ 00:09:42.977 END TEST lvs_grow_dirty 00:09:42.977 ************************************ 00:09:42.977 00:09:42.977 real 0m19.369s 00:09:42.977 user 0m38.630s 00:09:42.977 sys 0m9.558s 00:09:42.977 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:42.977 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:43.235 nvmf_trace.0 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:43.235 06:02:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:43.802 rmmod nvme_tcp 00:09:43.802 rmmod nvme_fabrics 00:09:43.802 rmmod nvme_keyring 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@513 -- # '[' -n 75631 ']' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@514 -- # killprocess 75631 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 75631 ']' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 75631 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:43.802 06:02:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75631 00:09:43.802 killing process with pid 75631 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75631' 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 75631 00:09:43.802 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 75631 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-save 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@787 -- # iptables-restore 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:44.061 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:09:44.319 ************************************ 00:09:44.319 END TEST nvmf_lvs_grow 00:09:44.319 ************************************ 00:09:44.319 00:09:44.319 real 0m39.485s 00:09:44.319 user 1m1.164s 00:09:44.319 sys 0m13.170s 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:44.319 ************************************ 00:09:44.319 START TEST nvmf_bdev_io_wait 00:09:44.319 ************************************ 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:44.319 * Looking for test storage... 
00:09:44.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.319 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.578 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.579 06:02:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.579 --rc genhtml_branch_coverage=1 00:09:44.579 --rc genhtml_function_coverage=1 00:09:44.579 --rc genhtml_legend=1 00:09:44.579 --rc geninfo_all_blocks=1 00:09:44.579 --rc geninfo_unexecuted_blocks=1 00:09:44.579 00:09:44.579 ' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.579 --rc genhtml_branch_coverage=1 00:09:44.579 --rc genhtml_function_coverage=1 00:09:44.579 --rc genhtml_legend=1 00:09:44.579 --rc geninfo_all_blocks=1 00:09:44.579 --rc geninfo_unexecuted_blocks=1 00:09:44.579 00:09:44.579 ' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:44.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.579 --rc genhtml_branch_coverage=1 00:09:44.579 --rc genhtml_function_coverage=1 00:09:44.579 --rc genhtml_legend=1 00:09:44.579 --rc geninfo_all_blocks=1 00:09:44.579 --rc geninfo_unexecuted_blocks=1 00:09:44.579 00:09:44.579 ' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.579 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.579 --rc genhtml_branch_coverage=1 00:09:44.579 --rc genhtml_function_coverage=1 00:09:44.579 --rc genhtml_legend=1 00:09:44.579 --rc geninfo_all_blocks=1 00:09:44.579 --rc geninfo_unexecuted_blocks=1 00:09:44.579 00:09:44.579 ' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:44.579 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:44.579 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.580 
06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:44.580 Cannot find device "nvmf_init_br" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:44.580 Cannot find device "nvmf_init_br2" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:44.580 Cannot find device "nvmf_tgt_br" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.580 Cannot find device "nvmf_tgt_br2" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:44.580 Cannot find device "nvmf_init_br" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:44.580 Cannot find device "nvmf_init_br2" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:44.580 Cannot find device "nvmf_tgt_br" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:44.580 Cannot find device "nvmf_tgt_br2" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:44.580 Cannot find device "nvmf_br" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:44.580 Cannot find device "nvmf_init_if" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:44.580 Cannot find device "nvmf_init_if2" 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:09:44.580 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:44.580 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:09:44.839 
06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:44.839 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:44.839 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:44.839 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:09:44.839 00:09:44.839 --- 10.0.0.3 ping statistics --- 00:09:44.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.839 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:44.839 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:44.839 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:09:44.839 00:09:44.839 --- 10.0.0.4 ping statistics --- 00:09:44.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.839 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:44.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:44.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:09:44.839 00:09:44.839 --- 10.0.0.1 ping statistics --- 00:09:44.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.839 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:44.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:44.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:09:44.839 00:09:44.839 --- 10.0.0.2 ping statistics --- 00:09:44.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:44.839 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@457 -- # return 0 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@505 -- # nvmfpid=76006 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@506 -- # waitforlisten 76006 00:09:44.839 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 76006 ']' 00:09:44.840 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:44.840 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:44.840 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.840 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:44.840 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.097 [2024-10-01 06:02:10.498655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:45.097 [2024-10-01 06:02:10.498954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.097 [2024-10-01 06:02:10.641365] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.097 [2024-10-01 06:02:10.685701] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.097 [2024-10-01 06:02:10.685763] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.097 [2024-10-01 06:02:10.685777] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.097 [2024-10-01 06:02:10.685787] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.097 [2024-10-01 06:02:10.685797] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.097 [2024-10-01 06:02:10.685958] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.097 [2024-10-01 06:02:10.686283] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.097 [2024-10-01 06:02:10.686697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.097 [2024-10-01 06:02:10.686738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 [2024-10-01 06:02:10.850468] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 [2024-10-01 06:02:10.865589] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 Malloc0 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:45.356 [2024-10-01 06:02:10.930340] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=76028 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=76030 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=76032 00:09:45.356 06:02:10 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=76034 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:45.356 { 00:09:45.356 "params": { 00:09:45.356 "name": "Nvme$subsystem", 00:09:45.356 "trtype": "$TEST_TRANSPORT", 00:09:45.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.356 "adrfam": "ipv4", 00:09:45.356 "trsvcid": "$NVMF_PORT", 00:09:45.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.356 "hdgst": ${hdgst:-false}, 00:09:45.356 "ddgst": ${ddgst:-false} 00:09:45.356 }, 00:09:45.356 "method": "bdev_nvme_attach_controller" 00:09:45.356 } 00:09:45.356 EOF 00:09:45.356 )") 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:45.356 { 00:09:45.356 "params": { 00:09:45.356 "name": "Nvme$subsystem", 00:09:45.356 "trtype": "$TEST_TRANSPORT", 00:09:45.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.356 "adrfam": "ipv4", 00:09:45.356 "trsvcid": "$NVMF_PORT", 00:09:45.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.356 "hdgst": ${hdgst:-false}, 00:09:45.356 "ddgst": ${ddgst:-false} 00:09:45.356 }, 00:09:45.356 "method": "bdev_nvme_attach_controller" 00:09:45.356 } 00:09:45.356 EOF 00:09:45.356 )") 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # config=() 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem config 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # local subsystem 
config 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:45.356 { 00:09:45.356 "params": { 00:09:45.356 "name": "Nvme$subsystem", 00:09:45.356 "trtype": "$TEST_TRANSPORT", 00:09:45.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.356 "adrfam": "ipv4", 00:09:45.356 "trsvcid": "$NVMF_PORT", 00:09:45.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.356 "hdgst": ${hdgst:-false}, 00:09:45.356 "ddgst": ${ddgst:-false} 00:09:45.356 }, 00:09:45.356 "method": "bdev_nvme_attach_controller" 00:09:45.356 } 00:09:45.356 EOF 00:09:45.356 )") 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:45.356 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:09:45.356 { 00:09:45.356 "params": { 00:09:45.356 "name": "Nvme$subsystem", 00:09:45.356 "trtype": "$TEST_TRANSPORT", 00:09:45.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.356 "adrfam": "ipv4", 00:09:45.357 "trsvcid": "$NVMF_PORT", 00:09:45.357 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.357 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.357 "hdgst": ${hdgst:-false}, 00:09:45.357 "ddgst": ${ddgst:-false} 00:09:45.357 }, 00:09:45.357 "method": "bdev_nvme_attach_controller" 00:09:45.357 } 00:09:45.357 EOF 00:09:45.357 )") 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@578 -- # cat 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:45.357 "params": { 00:09:45.357 "name": "Nvme1", 00:09:45.357 "trtype": "tcp", 00:09:45.357 "traddr": "10.0.0.3", 00:09:45.357 "adrfam": "ipv4", 00:09:45.357 "trsvcid": "4420", 00:09:45.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.357 "hdgst": false, 00:09:45.357 "ddgst": false 00:09:45.357 }, 00:09:45.357 "method": "bdev_nvme_attach_controller" 00:09:45.357 }' 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:45.357 "params": { 00:09:45.357 "name": "Nvme1", 00:09:45.357 "trtype": "tcp", 00:09:45.357 "traddr": "10.0.0.3", 00:09:45.357 "adrfam": "ipv4", 00:09:45.357 "trsvcid": "4420", 00:09:45.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.357 "hdgst": false, 00:09:45.357 "ddgst": false 00:09:45.357 }, 00:09:45.357 "method": "bdev_nvme_attach_controller" 00:09:45.357 }' 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 
00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:45.357 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:45.357 "params": { 00:09:45.357 "name": "Nvme1", 00:09:45.357 "trtype": "tcp", 00:09:45.357 "traddr": "10.0.0.3", 00:09:45.357 "adrfam": "ipv4", 00:09:45.357 "trsvcid": "4420", 00:09:45.357 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.357 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.357 "hdgst": false, 00:09:45.357 "ddgst": false 00:09:45.357 }, 00:09:45.357 "method": "bdev_nvme_attach_controller" 00:09:45.357 }' 00:09:45.615 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@580 -- # jq . 00:09:45.615 [2024-10-01 06:02:10.996792] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:45.615 [2024-10-01 06:02:10.997128] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:45.616 [2024-10-01 06:02:10.997283] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:45.616 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@581 -- # IFS=, 00:09:45.616 06:02:10 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:09:45.616 "params": { 00:09:45.616 "name": "Nvme1", 00:09:45.616 "trtype": "tcp", 00:09:45.616 "traddr": "10.0.0.3", 00:09:45.616 "adrfam": "ipv4", 00:09:45.616 "trsvcid": "4420", 00:09:45.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.616 "hdgst": false, 00:09:45.616 "ddgst": false 00:09:45.616 }, 00:09:45.616 "method": "bdev_nvme_attach_controller" 00:09:45.616 }' 00:09:45.616 [2024-10-01 06:02:10.998909] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:45.616 [2024-10-01 06:02:11.006389] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:09:45.616 [2024-10-01 06:02:11.006646] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:45.616 06:02:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 76028 00:09:45.616 [2024-10-01 06:02:11.023200] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:45.616 [2024-10-01 06:02:11.023980] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:45.616 [2024-10-01 06:02:11.178081] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.616 [2024-10-01 06:02:11.205914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.616 [2024-10-01 06:02:11.222157] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.874 [2024-10-01 06:02:11.238202] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.874 [2024-10-01 06:02:11.249720] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:09:45.874 [2024-10-01 06:02:11.262955] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.874 [2024-10-01 06:02:11.284076] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.874 [2024-10-01 06:02:11.291078] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.874 [2024-10-01 06:02:11.312122] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.874 [2024-10-01 06:02:11.331378] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.874 [2024-10-01 06:02:11.339596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 7 00:09:45.874 Running I/O for 1 seconds... 00:09:45.874 [2024-10-01 06:02:11.373919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:45.874 Running I/O for 1 seconds... 00:09:45.874 Running I/O for 1 seconds... 00:09:45.874 Running I/O for 1 seconds... 
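The configs consumed by the four bdevperf instances above are built on the fly: gen_nvmf_target_json emits one bdev_nvme_attach_controller entry per subsystem from a heredoc, joins the entries with jq, and each bdevperf reads the result over /dev/fd/63. A minimal standalone sketch of that pattern follows; the transport, target address, port and NQN values are illustrative stand-ins rather than values captured from this run, and the outer "subsystems"/"bdev" wrapper is an assumption, since the trace only shows the per-controller entries.

#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace above.
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT are illustrative assumptions.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.3
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem (cf. nvmf/common.sh@578 above).
        # hdgst/ddgst default to false when unset.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # Join the entries and pretty-print with jq (cf. nvmf/common.sh@580-582 above).
    # The wrapper layout below is an assumed shape for a bdevperf --json config.
    local IFS=,
    jq . <<EOF
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ ${config[*]} ]
    }
  ]
}
EOF
}

# The trace feeds this JSON to bdevperf on /dev/fd/63, presumably via process
# substitution, e.g.:
#   /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 \
#       --json <(gen_nvmf_target_json) -q 128 -o 4096 -w read -t 1 -s 256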
00:09:46.807 5915.00 IOPS, 23.11 MiB/s 00:09:46.808 Latency(us) 00:09:46.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.808 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:46.808 Nvme1n1 : 1.02 5897.21 23.04 0.00 0.00 21363.04 7685.59 35746.91 00:09:46.808 =================================================================================================================== 00:09:46.808 Total : 5897.21 23.04 0.00 0.00 21363.04 7685.59 35746.91 00:09:46.808 168080.00 IOPS, 656.56 MiB/s 00:09:46.808 Latency(us) 00:09:46.808 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.808 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:46.808 Nvme1n1 : 1.00 167733.86 655.21 0.00 0.00 759.28 417.05 2100.13 00:09:46.808 =================================================================================================================== 00:09:46.808 Total : 167733.86 655.21 0.00 0.00 759.28 417.05 2100.13 00:09:47.066 8505.00 IOPS, 33.22 MiB/s 00:09:47.066 Latency(us) 00:09:47.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.066 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:47.066 Nvme1n1 : 1.01 8559.20 33.43 0.00 0.00 14877.16 7387.69 26691.03 00:09:47.066 =================================================================================================================== 00:09:47.066 Total : 8559.20 33.43 0.00 0.00 14877.16 7387.69 26691.03 00:09:47.066 6001.00 IOPS, 23.44 MiB/s 00:09:47.066 Latency(us) 00:09:47.066 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.066 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:47.066 Nvme1n1 : 1.01 6140.55 23.99 0.00 0.00 20778.00 5332.25 49092.42 00:09:47.066 =================================================================================================================== 00:09:47.066 Total : 6140.55 23.99 0.00 0.00 20778.00 5332.25 49092.42 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 76030 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 76032 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 76034 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:47.066 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 
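For orientation, the four result tables above come from four concurrently running bdevperf instances, one per workload (write on core mask 0x10, read on 0x20, flush on 0x40, unmap on 0x80), all attached to the same cnode1 subsystem and waited on before teardown begins. A rough sketch of that launch-and-wait shape, reusing gen_nvmf_target_json from the sketch above; the bdevperf path, flags, core masks and shm ids are the ones visible in the trace, while the PID variable names (other than UNMAP_PID) are placeholders.

#!/usr/bin/env bash
# Launch-and-wait sketch for the bdev_io_wait workloads traced above.
# Assumes gen_nvmf_target_json from the previous sketch is defined (e.g. sourced).
BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
FLAGS=(-q 128 -o 4096 -t 1 -s 256)

"$BDEVPERF" -m 0x10 -i 1 --json <(gen_nvmf_target_json) "${FLAGS[@]}" -w write & WRITE_PID=$!
"$BDEVPERF" -m 0x20 -i 2 --json <(gen_nvmf_target_json) "${FLAGS[@]}" -w read  & READ_PID=$!
"$BDEVPERF" -m 0x40 -i 3 --json <(gen_nvmf_target_json) "${FLAGS[@]}" -w flush & FLUSH_PID=$!
"$BDEVPERF" -m 0x80 -i 4 --json <(gen_nvmf_target_json) "${FLAGS[@]}" -w unmap & UNMAP_PID=$!

# All four must finish before the subsystem is deleted and nvmftestfini runs
# (cf. "wait 76028", "wait 76030", "wait 76032", "wait 76034" in the trace).
wait "$WRITE_PID" "$READ_PID" "$FLUSH_PID" "$UNMAP_PID"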
00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:47.325 rmmod nvme_tcp 00:09:47.325 rmmod nvme_fabrics 00:09:47.325 rmmod nvme_keyring 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@513 -- # '[' -n 76006 ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@514 -- # killprocess 76006 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 76006 ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 76006 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76006 00:09:47.325 killing process with pid 76006 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76006' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 76006 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 76006 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-save 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@787 -- # iptables-restore 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:47.325 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:47.583 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 
00:09:47.583 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.584 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:47.584 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:47.584 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:47.584 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:47.584 06:02:12 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:09:47.584 00:09:47.584 real 0m3.313s 00:09:47.584 user 0m13.168s 00:09:47.584 sys 0m2.078s 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:47.584 ************************************ 00:09:47.584 END TEST nvmf_bdev_io_wait 00:09:47.584 ************************************ 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.584 ************************************ 00:09:47.584 START TEST nvmf_queue_depth 00:09:47.584 ************************************ 00:09:47.584 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.843 * Looking for test storage... 
00:09:47.843 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lcov --version 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:47.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.843 --rc genhtml_branch_coverage=1 00:09:47.843 --rc genhtml_function_coverage=1 00:09:47.843 --rc genhtml_legend=1 00:09:47.843 --rc geninfo_all_blocks=1 00:09:47.843 --rc geninfo_unexecuted_blocks=1 00:09:47.843 00:09:47.843 ' 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:47.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.843 --rc genhtml_branch_coverage=1 00:09:47.843 --rc genhtml_function_coverage=1 00:09:47.843 --rc genhtml_legend=1 00:09:47.843 --rc geninfo_all_blocks=1 00:09:47.843 --rc geninfo_unexecuted_blocks=1 00:09:47.843 00:09:47.843 ' 00:09:47.843 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:47.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.843 --rc genhtml_branch_coverage=1 00:09:47.843 --rc genhtml_function_coverage=1 00:09:47.844 --rc genhtml_legend=1 00:09:47.844 --rc geninfo_all_blocks=1 00:09:47.844 --rc geninfo_unexecuted_blocks=1 00:09:47.844 00:09:47.844 ' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:47.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.844 --rc genhtml_branch_coverage=1 00:09:47.844 --rc genhtml_function_coverage=1 00:09:47.844 --rc genhtml_legend=1 00:09:47.844 --rc geninfo_all_blocks=1 00:09:47.844 --rc geninfo_unexecuted_blocks=1 00:09:47.844 00:09:47.844 ' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:47.844 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:47.844 
06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@472 -- # prepare_net_devs 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@434 -- # local -g is_hw=no 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@436 -- # remove_spdk_ns 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@456 -- # nvmf_veth_init 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.844 06:02:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:09:47.844 Cannot find device "nvmf_init_br" 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:09:47.844 Cannot find device "nvmf_init_br2" 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:09:47.844 Cannot find device "nvmf_tgt_br" 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.844 Cannot find device "nvmf_tgt_br2" 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:09:47.844 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:09:48.105 Cannot find device "nvmf_init_br" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:09:48.105 Cannot find device "nvmf_init_br2" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:09:48.105 Cannot find device "nvmf_tgt_br" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:09:48.105 Cannot find device "nvmf_tgt_br2" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:09:48.105 Cannot find device "nvmf_br" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:09:48.105 Cannot find device "nvmf_init_if" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:09:48.105 Cannot find device "nvmf_init_if2" 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:48.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.105 06:02:13 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:48.105 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:09:48.105 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.106 
06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:09:48.106 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:09:48.364 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:09:48.365 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.365 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:09:48.365 00:09:48.365 --- 10.0.0.3 ping statistics --- 00:09:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.365 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:09:48.365 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:09:48.365 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:09:48.365 00:09:48.365 --- 10.0.0.4 ping statistics --- 00:09:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.365 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:09:48.365 00:09:48.365 --- 10.0.0.1 ping statistics --- 00:09:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.365 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:09:48.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:48.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.038 ms 00:09:48.365 00:09:48.365 --- 10.0.0.2 ping statistics --- 00:09:48.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.365 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@457 -- # return 0 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@505 -- # nvmfpid=76294 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@506 -- # waitforlisten 76294 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76294 ']' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.365 06:02:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.365 [2024-10-01 06:02:13.840611] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:48.365 [2024-10-01 06:02:13.840688] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.365 [2024-10-01 06:02:13.976992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.623 [2024-10-01 06:02:14.021478] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.623 [2024-10-01 06:02:14.021821] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.623 [2024-10-01 06:02:14.022058] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:48.623 [2024-10-01 06:02:14.022217] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:48.623 [2024-10-01 06:02:14.022260] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.623 [2024-10-01 06:02:14.022410] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.623 [2024-10-01 06:02:14.056930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.623 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.624 [2024-10-01 06:02:14.173020] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.624 Malloc0 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.624 [2024-10-01 06:02:14.226266] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:09:48.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=76313 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 76313 /var/tmp/bdevperf.sock 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 76313 ']' 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:48.624 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:48.882 [2024-10-01 06:02:14.277441] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:09:48.882 [2024-10-01 06:02:14.277857] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76313 ] 00:09:48.882 [2024-10-01 06:02:14.413283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.882 [2024-10-01 06:02:14.455965] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:48.882 [2024-10-01 06:02:14.490986] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:49.140 NVMe0n1 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.140 06:02:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:49.140 Running I/O for 10 seconds... 00:09:59.359 7188.00 IOPS, 28.08 MiB/s 7824.50 IOPS, 30.56 MiB/s 8182.00 IOPS, 31.96 MiB/s 8254.25 IOPS, 32.24 MiB/s 8404.60 IOPS, 32.83 MiB/s 8539.83 IOPS, 33.36 MiB/s 8639.14 IOPS, 33.75 MiB/s 8715.00 IOPS, 34.04 MiB/s 8757.44 IOPS, 34.21 MiB/s 8740.20 IOPS, 34.14 MiB/s 00:09:59.359 Latency(us) 00:09:59.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.359 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:59.359 Verification LBA range: start 0x0 length 0x4000 00:09:59.359 NVMe0n1 : 10.06 8777.65 34.29 0.00 0.00 116142.75 8162.21 83886.08 00:09:59.359 =================================================================================================================== 00:09:59.359 Total : 8777.65 34.29 0.00 0.00 116142.75 8162.21 83886.08 00:09:59.359 { 00:09:59.359 "results": [ 00:09:59.359 { 00:09:59.359 "job": "NVMe0n1", 00:09:59.359 "core_mask": "0x1", 00:09:59.359 "workload": "verify", 00:09:59.359 "status": "finished", 00:09:59.359 "verify_range": { 00:09:59.359 "start": 0, 00:09:59.359 "length": 16384 00:09:59.359 }, 00:09:59.359 "queue_depth": 1024, 00:09:59.359 "io_size": 4096, 00:09:59.359 "runtime": 10.062031, 00:09:59.359 "iops": 8777.6513509052, 00:09:59.359 "mibps": 34.28770058947344, 00:09:59.359 "io_failed": 0, 00:09:59.359 "io_timeout": 0, 00:09:59.359 "avg_latency_us": 116142.74889361224, 00:09:59.359 "min_latency_us": 8162.210909090909, 00:09:59.359 "max_latency_us": 83886.08 00:09:59.359 } 00:09:59.359 ], 00:09:59.359 "core_count": 1 00:09:59.359 } 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 76313 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76313 ']' 00:09:59.359 06:02:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76313 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76313 00:09:59.359 killing process with pid 76313 00:09:59.359 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.359 00:09:59.359 Latency(us) 00:09:59.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.359 =================================================================================================================== 00:09:59.359 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76313' 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76313 00:09:59.359 06:02:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76313 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # nvmfcleanup 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:59.618 rmmod nvme_tcp 00:09:59.618 rmmod nvme_fabrics 00:09:59.618 rmmod nvme_keyring 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@513 -- # '[' -n 76294 ']' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@514 -- # killprocess 76294 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 76294 ']' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 76294 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76294 00:09:59.618 killing process with pid 76294 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76294' 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 76294 00:09:59.618 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 76294 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-save 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # iptables-restore 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:09:59.878 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:00.136 06:02:25 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:10:00.136 00:10:00.136 real 0m12.391s 00:10:00.136 user 0m21.114s 00:10:00.136 sys 0m2.172s 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:00.136 ************************************ 00:10:00.136 END TEST nvmf_queue_depth 00:10:00.136 ************************************ 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:00.136 ************************************ 00:10:00.136 START TEST nvmf_target_multipath 00:10:00.136 ************************************ 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:00.136 * Looking for test storage... 
00:10:00.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:10:00.136 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:00.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.396 --rc genhtml_branch_coverage=1 00:10:00.396 --rc genhtml_function_coverage=1 00:10:00.396 --rc genhtml_legend=1 00:10:00.396 --rc geninfo_all_blocks=1 00:10:00.396 --rc geninfo_unexecuted_blocks=1 00:10:00.396 00:10:00.396 ' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:00.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.396 --rc genhtml_branch_coverage=1 00:10:00.396 --rc genhtml_function_coverage=1 00:10:00.396 --rc genhtml_legend=1 00:10:00.396 --rc geninfo_all_blocks=1 00:10:00.396 --rc geninfo_unexecuted_blocks=1 00:10:00.396 00:10:00.396 ' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:00.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.396 --rc genhtml_branch_coverage=1 00:10:00.396 --rc genhtml_function_coverage=1 00:10:00.396 --rc genhtml_legend=1 00:10:00.396 --rc geninfo_all_blocks=1 00:10:00.396 --rc geninfo_unexecuted_blocks=1 00:10:00.396 00:10:00.396 ' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:00.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:00.396 --rc genhtml_branch_coverage=1 00:10:00.396 --rc genhtml_function_coverage=1 00:10:00.396 --rc genhtml_legend=1 00:10:00.396 --rc geninfo_all_blocks=1 00:10:00.396 --rc geninfo_unexecuted_blocks=1 00:10:00.396 00:10:00.396 ' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.396 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.397 
06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:00.397 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:00.397 06:02:25 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:00.397 Cannot find device "nvmf_init_br" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:00.397 Cannot find device "nvmf_init_br2" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:00.397 Cannot find device "nvmf_tgt_br" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:00.397 Cannot find device "nvmf_tgt_br2" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:00.397 Cannot find device "nvmf_init_br" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:00.397 Cannot find device "nvmf_init_br2" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:00.397 Cannot find device "nvmf_tgt_br" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:00.397 Cannot find device "nvmf_tgt_br2" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:00.397 Cannot find device "nvmf_br" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:00.397 Cannot find device "nvmf_init_if" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:00.397 Cannot find device "nvmf_init_if2" 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:00.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:10:00.397 06:02:25 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:00.397 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:00.397 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:10:00.397 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:00.655 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:00.914 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:00.914 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:10:00.914 00:10:00.914 --- 10.0.0.3 ping statistics --- 00:10:00.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.914 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:00.914 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:00.914 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:10:00.914 00:10:00.914 --- 10.0.0.4 ping statistics --- 00:10:00.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.914 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:00.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:00.914 00:10:00.914 --- 10.0.0.1 ping statistics --- 00:10:00.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.914 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:00.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:10:00.914 00:10:00.914 --- 10.0.0.2 ping statistics --- 00:10:00.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.914 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@457 -- # return 0 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@505 -- # nvmfpid=76680 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@506 -- # waitforlisten 76680 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@831 -- # '[' -z 76680 ']' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:00.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:00.914 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:00.914 [2024-10-01 06:02:26.365591] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:10:00.914 [2024-10-01 06:02:26.365679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:00.914 [2024-10-01 06:02:26.498436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.172 [2024-10-01 06:02:26.533724] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.172 [2024-10-01 06:02:26.534030] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.172 [2024-10-01 06:02:26.534187] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.172 [2024-10-01 06:02:26.534303] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.172 [2024-10-01 06:02:26.534359] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.172 [2024-10-01 06:02:26.534561] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.172 [2024-10-01 06:02:26.534794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.172 [2024-10-01 06:02:26.534697] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.172 [2024-10-01 06:02:26.535462] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.172 [2024-10-01 06:02:26.563677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # return 0 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.172 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.431 [2024-10-01 06:02:26.954976] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.431 06:02:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:01.689 Malloc0 00:10:01.689 06:02:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:01.947 06:02:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:02.205 06:02:27 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:02.463 [2024-10-01 06:02:28.010161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:02.463 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:10:02.721 [2024-10-01 06:02:28.254417] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:10:02.721 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1198 -- # local i=0 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:02.979 06:02:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1205 -- # sleep 2 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1208 -- # return 0 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in 
/sys/class/nvme-subsystem/* 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.508 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=76762 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:10:05.509 06:02:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:05.509 [global] 00:10:05.509 thread=1 00:10:05.509 invalidate=1 00:10:05.509 rw=randrw 00:10:05.509 time_based=1 00:10:05.509 runtime=6 00:10:05.509 ioengine=libaio 00:10:05.509 direct=1 00:10:05.509 bs=4096 00:10:05.509 iodepth=128 00:10:05.509 norandommap=0 00:10:05.509 numjobs=1 00:10:05.509 00:10:05.509 verify_dump=1 00:10:05.509 verify_backlog=512 00:10:05.509 verify_state_save=0 00:10:05.509 do_verify=1 00:10:05.509 verify=crc32c-intel 00:10:05.509 [job0] 00:10:05.509 filename=/dev/nvme0n1 00:10:05.509 Could not set queue depth (nvme0n1) 00:10:05.509 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:05.509 fio-3.35 00:10:05.509 Starting 1 thread 00:10:06.075 06:02:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:06.333 06:02:31 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:06.900 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:07.158 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:07.416 06:02:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 76762 00:10:11.603 00:10:11.603 job0: (groupid=0, jobs=1): err= 0: pid=76789: Tue Oct 1 06:02:36 2024 00:10:11.603 read: IOPS=10.4k, BW=40.7MiB/s (42.7MB/s)(244MiB/6002msec) 00:10:11.603 slat (usec): min=2, max=5933, avg=56.12, stdev=217.33 00:10:11.603 clat (usec): min=840, max=16750, avg=8390.13, stdev=1499.13 00:10:11.603 lat (usec): min=875, max=16761, avg=8446.26, stdev=1503.31 00:10:11.603 clat percentiles (usec): 00:10:11.603 | 1.00th=[ 4228], 5.00th=[ 6194], 10.00th=[ 7046], 20.00th=[ 7570], 00:10:11.603 | 30.00th=[ 7832], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8455], 00:10:11.603 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9896], 95.00th=[11731], 00:10:11.603 | 99.00th=[13042], 99.50th=[13435], 99.90th=[13960], 99.95th=[14353], 00:10:11.603 | 99.99th=[16188] 00:10:11.603 bw ( KiB/s): min= 7776, max=28128, per=51.09%, avg=21298.18, stdev=6591.58, samples=11 00:10:11.603 iops : min= 1944, max= 7032, avg=5324.55, stdev=1647.89, samples=11 00:10:11.603 write: IOPS=6148, BW=24.0MiB/s (25.2MB/s)(126MiB/5263msec); 0 zone resets 00:10:11.603 slat (usec): min=3, max=2684, avg=65.86, stdev=161.19 00:10:11.603 clat (usec): min=686, max=16964, avg=7307.37, stdev=1438.77 00:10:11.603 lat (usec): min=717, max=16993, avg=7373.24, stdev=1443.85 00:10:11.603 clat percentiles (usec): 00:10:11.603 | 1.00th=[ 3195], 5.00th=[ 4228], 10.00th=[ 5342], 20.00th=[ 6652], 00:10:11.603 | 30.00th=[ 6980], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7635], 00:10:11.603 | 70.00th=[ 7898], 80.00th=[ 8160], 90.00th=[ 8586], 95.00th=[ 8979], 00:10:11.604 | 99.00th=[11469], 99.50th=[11994], 99.90th=[13435], 99.95th=[15533], 00:10:11.604 | 99.99th=[16450] 00:10:11.604 bw ( KiB/s): min= 8056, max=27880, per=86.96%, avg=21389.82, stdev=6427.21, samples=11 00:10:11.604 iops : min= 2014, max= 6970, avg=5347.45, stdev=1606.80, samples=11 00:10:11.604 lat (usec) : 750=0.01%, 1000=0.01% 00:10:11.604 lat (msec) : 2=0.04%, 4=1.83%, 10=91.00%, 20=7.12% 00:10:11.604 cpu : usr=5.40%, sys=21.98%, ctx=5664, majf=0, minf=127 00:10:11.604 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:11.604 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:11.604 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:11.604 issued rwts: total=62552,32362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:11.604 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:11.604 00:10:11.604 Run status group 0 (all jobs): 00:10:11.604 READ: bw=40.7MiB/s (42.7MB/s), 40.7MiB/s-40.7MiB/s (42.7MB/s-42.7MB/s), io=244MiB (256MB), run=6002-6002msec 00:10:11.604 WRITE: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=126MiB (133MB), run=5263-5263msec 00:10:11.604 00:10:11.604 Disk stats (read/write): 00:10:11.604 nvme0n1: ios=61606/31747, merge=0/0, ticks=494983/217094, in_queue=712077, util=98.63% 00:10:11.604 06:02:36 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:11.604 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=76868 00:10:12.171 06:02:37 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:10:12.171 [global] 00:10:12.171 thread=1 00:10:12.171 invalidate=1 00:10:12.171 rw=randrw 00:10:12.171 time_based=1 00:10:12.171 runtime=6 00:10:12.171 ioengine=libaio 00:10:12.171 direct=1 00:10:12.171 bs=4096 00:10:12.171 iodepth=128 00:10:12.171 norandommap=0 00:10:12.171 numjobs=1 00:10:12.171 00:10:12.171 verify_dump=1 00:10:12.171 verify_backlog=512 00:10:12.171 verify_state_save=0 00:10:12.171 do_verify=1 00:10:12.171 verify=crc32c-intel 00:10:12.171 [job0] 00:10:12.171 filename=/dev/nvme0n1 00:10:12.171 Could not set queue depth (nvme0n1) 00:10:12.171 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.171 fio-3.35 00:10:12.171 Starting 1 thread 00:10:13.106 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:13.368 06:02:38 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.4 -s 4420 -n non_optimized 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:13.632 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:13.633 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:13.633 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:13.891 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:14.150 06:02:39 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 76868 00:10:18.338 00:10:18.338 job0: (groupid=0, jobs=1): err= 0: pid=76889: Tue Oct 1 06:02:43 2024 00:10:18.338 read: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(263MiB/6004msec) 00:10:18.338 slat (usec): min=2, max=15718, avg=43.89, stdev=206.85 00:10:18.338 clat (usec): min=309, max=28616, avg=7818.58, stdev=2333.83 00:10:18.338 lat (usec): min=318, max=28675, avg=7862.46, stdev=2349.09 00:10:18.338 clat percentiles (usec): 00:10:18.338 | 1.00th=[ 971], 5.00th=[ 3556], 10.00th=[ 4686], 20.00th=[ 6063], 00:10:18.338 | 30.00th=[ 7308], 40.00th=[ 7898], 50.00th=[ 8225], 60.00th=[ 8455], 00:10:18.338 | 70.00th=[ 8717], 80.00th=[ 9110], 90.00th=[ 9765], 95.00th=[11994], 00:10:18.338 | 99.00th=[13566], 99.50th=[13960], 99.90th=[16909], 99.95th=[23200], 00:10:18.338 | 99.99th=[27395] 00:10:18.338 bw ( KiB/s): min=13440, max=36560, per=53.33%, avg=23934.55, stdev=7551.83, samples=11 00:10:18.338 iops : min= 3360, max= 9140, avg=5983.64, stdev=1887.96, samples=11 00:10:18.338 write: IOPS=6631, BW=25.9MiB/s (27.2MB/s)(140MiB/5402msec); 0 zone resets 00:10:18.338 slat (usec): min=3, max=1830, avg=52.78, stdev=139.16 00:10:18.338 clat (usec): min=1226, max=30625, avg=6581.10, stdev=2001.40 00:10:18.338 lat (usec): min=1246, max=30643, avg=6633.87, stdev=2016.90 00:10:18.338 clat percentiles (usec): 00:10:18.338 | 1.00th=[ 2376], 5.00th=[ 3228], 10.00th=[ 3752], 20.00th=[ 4490], 00:10:18.338 | 30.00th=[ 5342], 40.00th=[ 6652], 50.00th=[ 7242], 60.00th=[ 7570], 00:10:18.338 | 70.00th=[ 7832], 80.00th=[ 8094], 90.00th=[ 8455], 95.00th=[ 8717], 00:10:18.338 | 99.00th=[11207], 99.50th=[11994], 99.90th=[20055], 99.95th=[26084], 00:10:18.338 | 99.99th=[27395] 00:10:18.338 bw ( KiB/s): min=14008, max=36864, per=90.30%, avg=23955.64, stdev=7364.06, samples=11 00:10:18.338 iops : min= 3502, max= 9216, avg=5988.91, stdev=1841.02, samples=11 00:10:18.338 lat (usec) : 500=0.06%, 750=0.25%, 1000=0.38% 00:10:18.338 lat (msec) : 2=0.53%, 4=7.88%, 10=84.96%, 20=5.87%, 50=0.07% 00:10:18.338 cpu : usr=5.80%, sys=21.27%, ctx=5993, majf=0, minf=90 00:10:18.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:18.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:18.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:18.338 issued rwts: total=67361,35826,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:18.338 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:10:18.338 00:10:18.338 Run status group 0 (all jobs): 00:10:18.338 READ: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=263MiB (276MB), run=6004-6004msec 00:10:18.338 WRITE: bw=25.9MiB/s (27.2MB/s), 25.9MiB/s-25.9MiB/s (27.2MB/s-27.2MB/s), io=140MiB (147MB), run=5402-5402msec 00:10:18.338 00:10:18.338 Disk stats (read/write): 00:10:18.338 nvme0n1: ios=66655/35153, merge=0/0, ticks=500863/217016, in_queue=717879, util=98.53% 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.338 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1219 -- # local i=0 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # return 0 00:10:18.338 06:02:43 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:18.595 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.853 rmmod nvme_tcp 00:10:18.853 rmmod nvme_fabrics 00:10:18.853 rmmod nvme_keyring 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@513 -- # 
'[' -n 76680 ']' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@514 -- # killprocess 76680 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@950 -- # '[' -z 76680 ']' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # kill -0 76680 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # uname 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76680 00:10:18.853 killing process with pid 76680 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76680' 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@969 -- # kill 76680 00:10:18.853 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@974 -- # wait 76680 00:10:19.111 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-save 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:19.112 
06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.112 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:10:19.370 ************************************ 00:10:19.370 END TEST nvmf_target_multipath 00:10:19.370 ************************************ 00:10:19.370 00:10:19.370 real 0m19.109s 00:10:19.370 user 1m10.035s 00:10:19.370 sys 0m10.526s 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:19.370 ************************************ 00:10:19.370 START TEST nvmf_zcopy 00:10:19.370 ************************************ 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:19.370 * Looking for test storage... 
00:10:19.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lcov --version 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.370 --rc genhtml_branch_coverage=1 00:10:19.370 --rc genhtml_function_coverage=1 00:10:19.370 --rc genhtml_legend=1 00:10:19.370 --rc geninfo_all_blocks=1 00:10:19.370 --rc geninfo_unexecuted_blocks=1 00:10:19.370 00:10:19.370 ' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.370 --rc genhtml_branch_coverage=1 00:10:19.370 --rc genhtml_function_coverage=1 00:10:19.370 --rc genhtml_legend=1 00:10:19.370 --rc geninfo_all_blocks=1 00:10:19.370 --rc geninfo_unexecuted_blocks=1 00:10:19.370 00:10:19.370 ' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.370 --rc genhtml_branch_coverage=1 00:10:19.370 --rc genhtml_function_coverage=1 00:10:19.370 --rc genhtml_legend=1 00:10:19.370 --rc geninfo_all_blocks=1 00:10:19.370 --rc geninfo_unexecuted_blocks=1 00:10:19.370 00:10:19.370 ' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:19.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:19.370 --rc genhtml_branch_coverage=1 00:10:19.370 --rc genhtml_function_coverage=1 00:10:19.370 --rc genhtml_legend=1 00:10:19.370 --rc geninfo_all_blocks=1 00:10:19.370 --rc geninfo_unexecuted_blocks=1 00:10:19.370 00:10:19.370 ' 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
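Note: the scripts/common.sh trace above (lt 1.15 2 -> cmp_versions, IFS=.-:, element-wise compare) is the lcov version gate for the coverage options that follow. A minimal bash sketch of that comparison logic, reconstructed from the trace; the helper name and loop below are a simplified reconstruction, not the verbatim SPDK implementation:

    # Return 0 when dotted version $1 is strictly older than $2.
    ver_lt() {
        local IFS=.-:
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < (${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]}); i++)); do
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 1.15 2 && echo "lcov 1.15 predates 2"   # the branch taken in the trace above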
00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.370 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.629 06:02:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:19.629 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
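Note: the "[: : integer expression expected" message above is test(1) refusing to compare an empty string with -eq; the variable guarded at common.sh line 33 is simply unset in this run, so '[' '' -eq 1 ']' errors out and the check falls through. A short sketch of the failure mode and a defensive form (the variable name below is a placeholder, not the one used by common.sh):

    flag=''                     # unset/empty in this environment
    [ "$flag" -eq 1 ]           # -> "[: : integer expression expected", exit status 2
    [ "${flag:-0}" -eq 1 ]      # defaults to 0, so the test cleanly evaluates to false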
00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:19.629 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:19.630 Cannot find device "nvmf_init_br" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:10:19.630 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:19.630 Cannot find device "nvmf_init_br2" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:19.630 Cannot find device "nvmf_tgt_br" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:19.630 Cannot find device "nvmf_tgt_br2" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:19.630 Cannot find device "nvmf_init_br" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:19.630 Cannot find device "nvmf_init_br2" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:19.630 Cannot find device "nvmf_tgt_br" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:19.630 Cannot find device "nvmf_tgt_br2" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:19.630 Cannot find device "nvmf_br" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:19.630 Cannot find device "nvmf_init_if" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:19.630 Cannot find device "nvmf_init_if2" 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:19.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:19.630 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:19.630 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:19.888 06:02:45 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:19.888 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:19.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:19.889 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:10:19.889 00:10:19.889 --- 10.0.0.3 ping statistics --- 00:10:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.889 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:19.889 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:19.889 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:10:19.889 00:10:19.889 --- 10.0.0.4 ping statistics --- 00:10:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.889 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:19.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:19.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:19.889 00:10:19.889 --- 10.0.0.1 ping statistics --- 00:10:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.889 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:19.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:19.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.099 ms 00:10:19.889 00:10:19.889 --- 10.0.0.2 ping statistics --- 00:10:19.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:19.889 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@457 -- # return 0 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@505 -- # nvmfpid=77193 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@506 -- # waitforlisten 77193 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 77193 ']' 00:10:19.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:19.889 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:19.889 [2024-10-01 06:02:45.448691] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:19.889 [2024-10-01 06:02:45.448790] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.147 [2024-10-01 06:02:45.584037] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.148 [2024-10-01 06:02:45.624813] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.148 [2024-10-01 06:02:45.624885] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.148 [2024-10-01 06:02:45.624921] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:20.148 [2024-10-01 06:02:45.624933] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:20.148 [2024-10-01 06:02:45.624942] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:20.148 [2024-10-01 06:02:45.624973] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:20.148 [2024-10-01 06:02:45.658360] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.148 [2024-10-01 06:02:45.756521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.148 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:10:20.406 [2024-10-01 06:02:45.772645] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 malloc0 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:20.406 { 00:10:20.406 "params": { 00:10:20.406 "name": "Nvme$subsystem", 00:10:20.406 "trtype": "$TEST_TRANSPORT", 00:10:20.406 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:20.406 "adrfam": "ipv4", 00:10:20.406 "trsvcid": "$NVMF_PORT", 00:10:20.406 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:20.406 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:20.406 "hdgst": ${hdgst:-false}, 00:10:20.406 "ddgst": ${ddgst:-false} 00:10:20.406 }, 00:10:20.406 "method": "bdev_nvme_attach_controller" 00:10:20.406 } 00:10:20.406 EOF 00:10:20.406 )") 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 
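Note: the rpc_cmd calls traced above assemble the zcopy target: a TCP transport with zero-copy enabled, subsystem cnode1, a listener on 10.0.0.3:4420, and a 32 MiB malloc bdev exposed as namespace 1. Collected here as explicit rpc.py invocations for readability; flags are copied from the trace, and rpc_cmd in the harness is assumed to forward to scripts/rpc.py against the target running in the nvmf_tgt_ns_spdk namespace:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1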
00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:20.406 06:02:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:20.406 "params": { 00:10:20.406 "name": "Nvme1", 00:10:20.406 "trtype": "tcp", 00:10:20.406 "traddr": "10.0.0.3", 00:10:20.406 "adrfam": "ipv4", 00:10:20.406 "trsvcid": "4420", 00:10:20.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:20.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:20.406 "hdgst": false, 00:10:20.406 "ddgst": false 00:10:20.406 }, 00:10:20.406 "method": "bdev_nvme_attach_controller" 00:10:20.406 }' 00:10:20.406 [2024-10-01 06:02:45.870083] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:10:20.406 [2024-10-01 06:02:45.870178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77216 ] 00:10:20.406 [2024-10-01 06:02:46.010767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.665 [2024-10-01 06:02:46.054280] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.665 [2024-10-01 06:02:46.096860] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:20.665 Running I/O for 10 seconds... 00:10:30.909 5870.00 IOPS, 45.86 MiB/s 6136.00 IOPS, 47.94 MiB/s 6267.67 IOPS, 48.97 MiB/s 6331.50 IOPS, 49.46 MiB/s 6340.20 IOPS, 49.53 MiB/s 6342.00 IOPS, 49.55 MiB/s 6313.71 IOPS, 49.33 MiB/s 6320.88 IOPS, 49.38 MiB/s 6348.67 IOPS, 49.60 MiB/s 6343.60 IOPS, 49.56 MiB/s 00:10:30.909 Latency(us) 00:10:30.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.909 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:30.909 Verification LBA range: start 0x0 length 0x1000 00:10:30.909 Nvme1n1 : 10.01 6344.45 49.57 0.00 0.00 20109.84 763.35 35508.60 00:10:30.909 =================================================================================================================== 00:10:30.909 Total : 6344.45 49.57 0.00 0.00 20109.84 763.35 35508.60 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=77340 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # config=() 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # local subsystem config 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:10:30.909 { 00:10:30.909 "params": { 00:10:30.909 "name": "Nvme$subsystem", 00:10:30.909 "trtype": "$TEST_TRANSPORT", 00:10:30.909 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:30.909 "adrfam": "ipv4", 00:10:30.909 "trsvcid": "$NVMF_PORT", 00:10:30.909 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:10:30.909 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:30.909 "hdgst": ${hdgst:-false}, 00:10:30.909 "ddgst": ${ddgst:-false} 00:10:30.909 }, 00:10:30.909 "method": "bdev_nvme_attach_controller" 00:10:30.909 } 00:10:30.909 EOF 00:10:30.909 )") 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@578 -- # cat 00:10:30.909 [2024-10-01 06:02:56.369318] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.369362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@580 -- # jq . 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@581 -- # IFS=, 00:10:30.909 06:02:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:10:30.909 "params": { 00:10:30.909 "name": "Nvme1", 00:10:30.909 "trtype": "tcp", 00:10:30.909 "traddr": "10.0.0.3", 00:10:30.909 "adrfam": "ipv4", 00:10:30.909 "trsvcid": "4420", 00:10:30.909 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:30.909 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:30.909 "hdgst": false, 00:10:30.909 "ddgst": false 00:10:30.909 }, 00:10:30.909 "method": "bdev_nvme_attach_controller" 00:10:30.909 }' 00:10:30.909 [2024-10-01 06:02:56.381258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.381468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-10-01 06:02:56.393261] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.393307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-10-01 06:02:56.405256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.405299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-10-01 06:02:56.405759] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:30.909 [2024-10-01 06:02:56.405835] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77340 ] 00:10:30.909 [2024-10-01 06:02:56.417262] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.417303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.909 [2024-10-01 06:02:56.429309] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.909 [2024-10-01 06:02:56.429354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.441263] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.441334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.453276] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.453318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.465281] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.465324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.477285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.477328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.489274] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.489328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.501289] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.501330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:30.910 [2024-10-01 06:02:56.513288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:30.910 [2024-10-01 06:02:56.513344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.525290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.525316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.537305] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.537343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.541076] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.169 [2024-10-01 06:02:56.549336] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.549384] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.561335] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.561382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.573339] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.573387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.578488] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.169 [2024-10-01 06:02:56.585313] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.585350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.597363] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.597395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.609367] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.609415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.617443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:31.169 [2024-10-01 06:02:56.621361] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.621403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.633368] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.633416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.645401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.645447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.657372] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.657416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.669387] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.669432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.681388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.681434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.693386] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.693428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.705437] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.705484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.717428] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.717473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 Running I/O for 5 seconds... 
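
From this point the log interleaves the bdevperf I/O run started above with repeated attempts to add a namespace to the target; each subsystem.c:2128 / nvmf_rpc.c:1517 error pair is one rejected add, because the requested NSID 1 on nqn.2016-06.io.spdk:cnode1 is already occupied while the run is in flight. A request of the following shape is the kind that produces that pair of messages; it is a sketch only, and the bdev name Malloc0 and the by-hand rpc.py call are illustrative assumptions rather than values taken from this log.

  # sketch: requesting an NSID the subsystem already exposes is rejected with
  # "Requested NSID 1 already in use" followed by "Unable to add namespace"
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1

The periodic throughput samples further down (for example 11655.00 IOPS, 91.05 MiB/s) are consistent with 8 KiB I/Os, since 11655 * 8192 bytes per second is roughly 91.05 MiB/s; the I/O size is not visible in this excerpt, so that reading is an inference rather than a logged fact.
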
00:10:31.169 [2024-10-01 06:02:56.736062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.736109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.750512] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.750558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.169 [2024-10-01 06:02:56.765881] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.169 [2024-10-01 06:02:56.765936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.785314] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.785360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.800543] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.800589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.817516] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.817564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.834099] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.834146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.850097] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.850143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.859889] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.859957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.874874] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.874936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.890227] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.890260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.900446] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.900493] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.918265] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.918297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.933597] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.933645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.943544] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 
[2024-10-01 06:02:56.943577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.428 [2024-10-01 06:02:56.960205] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.428 [2024-10-01 06:02:56.960235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.429 [2024-10-01 06:02:56.976719] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.429 [2024-10-01 06:02:56.976765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.429 [2024-10-01 06:02:56.992804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.429 [2024-10-01 06:02:56.992849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.429 [2024-10-01 06:02:57.009459] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.429 [2024-10-01 06:02:57.009505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.429 [2024-10-01 06:02:57.026580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.429 [2024-10-01 06:02:57.026648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.429 [2024-10-01 06:02:57.042694] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.429 [2024-10-01 06:02:57.042760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.060846] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.060936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.076338] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.076404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.093163] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.093216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.109107] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.109173] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.118427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.118486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.687 [2024-10-01 06:02:57.135378] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.687 [2024-10-01 06:02:57.135427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.151517] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.151559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.169692] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.169756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.184842] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.184893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.194529] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.194587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.211175] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.211228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.228217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.228287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.246044] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.246087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.260754] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.260799] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.277452] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.277497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.688 [2024-10-01 06:02:57.293297] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.688 [2024-10-01 06:02:57.293387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.303729] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.303797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.319285] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.319318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.334523] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.334569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.350122] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.350168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.367121] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.367166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.383993] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.384037] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.399837] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.399876] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.417742] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.417787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.433168] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.433198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.442732] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.442776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.459514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.459544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.476884] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.476939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.491897] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.491967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.508059] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.508088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.524062] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.524104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.541596] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.541640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.947 [2024-10-01 06:02:57.558166] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.947 [2024-10-01 06:02:57.558207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.573895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.573963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.585453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.585498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.601996] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.602022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.618053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.618110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.636304] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.636383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.651453] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.651489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.667125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.667170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.683835] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.683879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.700201] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.700243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.716998] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.717061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 11655.00 IOPS, 91.05 MiB/s [2024-10-01 06:02:57.732693] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.732753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.743415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.743456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.757474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.757531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.767865] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.767909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.781922] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.781976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.797316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.797360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.207 [2024-10-01 06:02:57.815120] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.207 [2024-10-01 06:02:57.815166] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.830065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.830115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.845000] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 
06:02:57.845036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.861553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.861601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.877724] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.877775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.894574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.894631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.910451] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.910502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.920081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.920132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.935036] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.935085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.950087] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.950131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.967836] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.967880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:57.983743] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:57.983800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:58.001187] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:58.001216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:58.016735] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:58.016781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:58.033646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:58.033690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:58.051277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:58.051327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.466 [2024-10-01 06:02:58.065474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.466 [2024-10-01 06:02:58.065517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.082513] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.082556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.096482] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.096525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.114427] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.114499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.129288] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.129337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.139063] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.139096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.153928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.154020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.164301] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.164374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.178525] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.178569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.196813] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.196856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.211774] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.211818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.227716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.227776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.238290] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.238336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.253385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.253415] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.270582] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.270627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.286991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.287051] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.304216] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.304275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.320504] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.320548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.725 [2024-10-01 06:02:58.338341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.725 [2024-10-01 06:02:58.338402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.352804] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.352848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.369065] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.369094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.384924] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.384997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.401709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.401752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.419447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.419477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.434472] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.434517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.450967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.451009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.467990] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.468034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.484101] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.484144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.501951] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.501995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.515962] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.515998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.530922] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.530995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.539975] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.540010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.555400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.555431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.570802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.570864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.985 [2024-10-01 06:02:58.588277] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.985 [2024-10-01 06:02:58.588320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.603689] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.603751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.622675] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.622720] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.636845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.636888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.654511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.654545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.669796] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.669830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.680181] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.680213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.696936] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.697006] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.712553] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.712596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 11874.50 IOPS, 92.77 MiB/s [2024-10-01 06:02:58.729841] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.729885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.746295] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 
06:02:58.746338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.762579] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.762624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.780176] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.780219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.795191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.795235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.806641] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.806684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.823449] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.823495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.839259] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.839303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.244 [2024-10-01 06:02:58.858388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.244 [2024-10-01 06:02:58.858434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.873008] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.873053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.885153] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.885199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.900869] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.900927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.917995] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.918049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.934190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.934222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.944048] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.944092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.958511] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.958554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.974889] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.974958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:58.992255] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:58.992303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.008260] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.008293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.026348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.026393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.042321] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.042365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.059139] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.059185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.076082] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.076127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.092440] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.092486] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.503 [2024-10-01 06:02:59.109562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.503 [2024-10-01 06:02:59.109606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.125858] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.125902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.143943] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.143997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.159025] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.159068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.169991] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.170036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.185745] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.185790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.196273] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.196320] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.210723] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.210771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.220928] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.221002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.762 [2024-10-01 06:02:59.235946] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.762 [2024-10-01 06:02:59.236028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.253171] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.253204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.269877] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.269932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.286324] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.286372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.302571] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.302615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.321251] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.321326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.335817] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.335845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.351467] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.351499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.360966] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.361011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.763 [2024-10-01 06:02:59.373240] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.763 [2024-10-01 06:02:59.373301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.389373] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.389418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.398349] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.398394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.413797] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.413843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.430383] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.430428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.446450] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.446495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.463593] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.463638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.480822] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.480865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.497165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.497195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.515845] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.515889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.531430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.531462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.548496] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.548540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.564588] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.564617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.582589] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.582634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.597441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.597487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.613831] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.613876] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.022 [2024-10-01 06:02:59.625128] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.022 [2024-10-01 06:02:59.625172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.640623] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.640654] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.656967] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.657026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.673871] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.673946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.690217] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.690261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.709013] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.709057] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 12005.00 IOPS, 93.79 MiB/s [2024-10-01 06:02:59.723811] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.723855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.739403] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.739451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.758441] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.758485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.772418] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.772446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.789501] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.789547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.804607] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.804654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.814562] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.814606] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.829894] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.829966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.841778] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.841822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.858396] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.858440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.875411] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.875444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.282 [2024-10-01 06:02:59.891890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.282 [2024-10-01 06:02:59.891962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.907646] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.907703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.918931] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.918985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.935465] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.935512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.951716] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.951776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.969557] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.969600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:02:59.985028] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:02:59.985073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.001049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.001094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.016839] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.016877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.031823] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.031873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.047976] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.048034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.065691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.065737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.081613] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.081657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.100622] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.100669] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.115653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.115701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.125677] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.125740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.541 [2024-10-01 06:03:00.141872] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.541 [2024-10-01 06:03:00.141942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.157258] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.157315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.174401] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.174447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.190783] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.190827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.208816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.208859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.223506] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.223538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.239071] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.239115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.256370] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.256414] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.273423] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.273481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.288667] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.288697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.800 [2024-10-01 06:03:00.305198] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.800 [2024-10-01 06:03:00.305245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.321191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.321236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.339388] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.339421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.355150] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.355179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.371873] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.371943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.389125] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.389171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:34.801 [2024-10-01 06:03:00.405203] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:34.801 [2024-10-01 06:03:00.405264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.423271] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.423333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.433653] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.433700] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.449299] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.449333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.465199] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.465231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.483703] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.483764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.498540] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.498584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.516006] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.516050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.530866] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.530923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.547519] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.547565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.563463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.563508] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.572574] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.572618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.588709] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.588753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.598385] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.059 [2024-10-01 06:03:00.598429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.059 [2024-10-01 06:03:00.614341] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.060 [2024-10-01 06:03:00.614404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.060 [2024-10-01 06:03:00.623829] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.060 [2024-10-01 06:03:00.623874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.060 [2024-10-01 06:03:00.640400] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.060 [2024-10-01 06:03:00.640444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.060 [2024-10-01 06:03:00.656670] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.060 [2024-10-01 06:03:00.656714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.060 [2024-10-01 06:03:00.673842] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.060 [2024-10-01 06:03:00.673887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.688591] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.688636] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.705114] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.705151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 11947.25 IOPS, 93.34 MiB/s [2024-10-01 06:03:00.720316] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.720345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.735697] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.735738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.745430] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.745474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.761904] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.761957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.777647] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.777691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.787190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.787234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.803093] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.803136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.814712] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.814757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.831619] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.831652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.848112] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.848155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.864816] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.864860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.881172] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.881217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.900775] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.900819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.915053] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.915098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.318 [2024-10-01 06:03:00.930954] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.318 [2024-10-01 06:03:00.931008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:00.947039] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:00.947084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:00.965890] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:00.965956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:00.981397] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:00.981455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:00.997247] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:00.997278] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.006463] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.006505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.023040] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.023085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.039787] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.039833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.056081] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.056113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.074426] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.074471] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.089342] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.089388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.104802] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.104846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.122020] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.122064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.138086] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.138131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.157191] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.157237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.171045] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.171090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.576 [2024-10-01 06:03:01.187049] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.576 [2024-10-01 06:03:01.187093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.203895] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.203962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.219580] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.219626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.234494] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.234539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.243727] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.243772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.256484] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.256528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.272691] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.272735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.289111] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.289155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.304365] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.304425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.319938] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.319997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.330165] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.330195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.345489] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.345535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.356055] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.356087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.371253] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.371284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.386192] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.386224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.402621] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.402666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.419348] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.419396] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:35.835 [2024-10-01 06:03:01.435447] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:35.835 [2024-10-01 06:03:01.435480] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.454498] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.454544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.469421] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.469467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.485493] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.485538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.502038] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.502082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.519095] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.519139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.535235] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.535280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.553474] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.553520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.568680] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.568723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.580514] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.580560] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.597688] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.597733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.612477] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.612522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.622415] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.622461] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.637628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.637673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.648388] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.648449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.663784] 
subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.663830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.679190] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.679236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.694256] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.694301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.093 [2024-10-01 06:03:01.703887] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.093 [2024-10-01 06:03:01.703953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 11887.00 IOPS, 92.87 MiB/s [2024-10-01 06:03:01.720974] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.721008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 00:10:36.376 Latency(us) 00:10:36.376 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:36.376 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:36.376 Nvme1n1 : 5.01 11887.09 92.87 0.00 0.00 10754.97 4438.57 22520.55 00:10:36.376 =================================================================================================================== 00:10:36.376 Total : 11887.09 92.87 0.00 0.00 10754.97 4438.57 22520.55 00:10:36.376 [2024-10-01 06:03:01.732628] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.732673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.744618] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.744660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.756649] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.756704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.768644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.768695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.780652] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.780704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.792654] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.792705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.804657] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.804708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.816650] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 
[2024-10-01 06:03:01.816692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.828666] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.828716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.840651] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.840692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.852663] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.852710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.864659] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.864696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 [2024-10-01 06:03:01.876644] subsystem.c:2128:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:36.376 [2024-10-01 06:03:01.876680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:36.376 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (77340) - No such process 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 77340 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 delay0 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.376 06:03:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:10:36.635 [2024-10-01 06:03:02.069989] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:43.192 Initializing NVMe Controllers 00:10:43.192 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: 
nqn.2016-06.io.spdk:cnode1 00:10:43.192 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.192 Initialization complete. Launching workers. 00:10:43.192 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 846 00:10:43.192 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1130, failed to submit 36 00:10:43.192 success 1022, unsuccessful 108, failed 0 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:43.192 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:43.192 rmmod nvme_tcp 00:10:43.192 rmmod nvme_fabrics 00:10:43.192 rmmod nvme_keyring 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@513 -- # '[' -n 77193 ']' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@514 -- # killprocess 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 77193 ']' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:43.193 killing process with pid 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77193' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 77193 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@787 -- # iptables-save 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@787 -- # iptables-restore 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:10:43.193 00:10:43.193 real 0m23.983s 00:10:43.193 user 0m39.091s 00:10:43.193 sys 0m6.824s 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:43.193 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:43.193 ************************************ 00:10:43.193 END TEST nvmf_zcopy 00:10:43.193 ************************************ 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 
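Note on the zcopy run above: the long run of "Requested NSID 1 already in use" / "Unable to add namespace" errors is expected output, since the test keeps issuing nvmf_subsystem_add_ns for NSID 1 on nqn.2016-06.io.spdk:cnode1 while that NSID is still attached and I/O is in flight (hence the interleaved IOPS samples). A minimal sketch of the same flow, driven through SPDK's rpc.py instead of the harness's rpc_cmd wrapper, is below; the NQN, bdev names, flags, and the 10.0.0.3:4420 listener are taken from the log, while the rpc.py entry point and relative paths are assumptions for an SPDK checkout and may differ elsewhere.

    # replace the namespace with a delay bdev, then drive the abort example at it
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
    # abort example as invoked in the log: 5 s of randrw at queue depth 64 over TCP
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'
    # calling nvmf_subsystem_add_ns again with -n 1 while NSID 1 is attached
    # reproduces the "Requested NSID 1 already in use" error seen above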
00:10:43.452 ************************************ 00:10:43.452 START TEST nvmf_nmic 00:10:43.452 ************************************ 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:43.452 * Looking for test storage... 00:10:43.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lcov --version 00:10:43.452 06:03:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.452 --rc genhtml_branch_coverage=1 00:10:43.452 --rc genhtml_function_coverage=1 00:10:43.452 --rc genhtml_legend=1 00:10:43.452 --rc geninfo_all_blocks=1 00:10:43.452 --rc geninfo_unexecuted_blocks=1 00:10:43.452 00:10:43.452 ' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.452 --rc genhtml_branch_coverage=1 00:10:43.452 --rc genhtml_function_coverage=1 00:10:43.452 --rc genhtml_legend=1 00:10:43.452 --rc geninfo_all_blocks=1 00:10:43.452 --rc geninfo_unexecuted_blocks=1 00:10:43.452 00:10:43.452 ' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.452 --rc genhtml_branch_coverage=1 00:10:43.452 --rc genhtml_function_coverage=1 00:10:43.452 --rc genhtml_legend=1 00:10:43.452 --rc geninfo_all_blocks=1 00:10:43.452 --rc geninfo_unexecuted_blocks=1 00:10:43.452 00:10:43.452 ' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:43.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:43.452 --rc genhtml_branch_coverage=1 00:10:43.452 --rc genhtml_function_coverage=1 00:10:43.452 --rc genhtml_legend=1 00:10:43.452 --rc geninfo_all_blocks=1 00:10:43.452 --rc geninfo_unexecuted_blocks=1 00:10:43.452 00:10:43.452 ' 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:43.452 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:43.452 06:03:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:43.453 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:43.453 06:03:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:43.453 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:43.711 Cannot 
find device "nvmf_init_br" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:43.711 Cannot find device "nvmf_init_br2" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:43.711 Cannot find device "nvmf_tgt_br" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:43.711 Cannot find device "nvmf_tgt_br2" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:43.711 Cannot find device "nvmf_init_br" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:43.711 Cannot find device "nvmf_init_br2" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:43.711 Cannot find device "nvmf_tgt_br" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:43.711 Cannot find device "nvmf_tgt_br2" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:43.711 Cannot find device "nvmf_br" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:43.711 Cannot find device "nvmf_init_if" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:43.711 Cannot find device "nvmf_init_if2" 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:43.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:43.711 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:43.711 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:43.969 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:43.969 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:10:43.969 00:10:43.969 --- 10.0.0.3 ping statistics --- 00:10:43.969 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.969 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:43.969 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:43.969 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:43.969 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.034 ms 00:10:43.970 00:10:43.970 --- 10.0.0.4 ping statistics --- 00:10:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.970 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:43.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:43.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:43.970 00:10:43.970 --- 10.0.0.1 ping statistics --- 00:10:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.970 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:43.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:43.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:10:43.970 00:10:43.970 --- 10.0.0.2 ping statistics --- 00:10:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:43.970 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@457 -- # return 0 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@505 -- # nvmfpid=77718 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@506 -- # waitforlisten 77718 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 77718 ']' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.970 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.970 [2024-10-01 06:03:09.494025] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:43.970 [2024-10-01 06:03:09.494137] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.229 [2024-10-01 06:03:09.633703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.229 [2024-10-01 06:03:09.669555] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.229 [2024-10-01 06:03:09.669624] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.229 [2024-10-01 06:03:09.669650] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.229 [2024-10-01 06:03:09.669658] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.229 [2024-10-01 06:03:09.669665] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:44.229 [2024-10-01 06:03:09.670094] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.229 [2024-10-01 06:03:09.670525] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.229 [2024-10-01 06:03:09.670605] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.229 [2024-10-01 06:03:09.670610] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.229 [2024-10-01 06:03:09.700472] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.229 [2024-10-01 06:03:09.805438] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.229 Malloc0 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.229 06:03:09 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.229 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 [2024-10-01 06:03:09.853882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:44.488 test case1: single bdev can't be used in multiple subsystems 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 [2024-10-01 06:03:09.877676] bdev.c:8193:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:44.488 [2024-10-01 06:03:09.877708] subsystem.c:2157:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:44.488 [2024-10-01 06:03:09.877719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:44.488 request: 00:10:44.488 { 00:10:44.488 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:44.488 "namespace": { 00:10:44.488 "bdev_name": "Malloc0", 00:10:44.488 "no_auto_visible": false 00:10:44.488 }, 00:10:44.488 "method": "nvmf_subsystem_add_ns", 00:10:44.488 "req_id": 1 00:10:44.488 } 00:10:44.488 Got JSON-RPC error response 00:10:44.488 response: 00:10:44.488 { 00:10:44.488 "code": -32602, 00:10:44.488 "message": "Invalid parameters" 00:10:44.488 } 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:44.488 Adding namespace failed - expected result. 00:10:44.488 test case2: host connect to nvmf target in multiple paths 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.488 [2024-10-01 06:03:09.889796] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.488 06:03:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:44.488 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:10:44.746 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:44.746 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.746 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.746 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:44.746 06:03:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.645 06:03:12 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:46.645 06:03:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:46.645 [global] 00:10:46.645 thread=1 00:10:46.645 invalidate=1 00:10:46.645 rw=write 00:10:46.645 time_based=1 00:10:46.645 runtime=1 00:10:46.645 ioengine=libaio 00:10:46.645 direct=1 00:10:46.645 bs=4096 00:10:46.645 iodepth=1 00:10:46.645 norandommap=0 00:10:46.645 numjobs=1 00:10:46.645 00:10:46.645 verify_dump=1 00:10:46.645 verify_backlog=512 00:10:46.645 verify_state_save=0 00:10:46.645 do_verify=1 00:10:46.645 verify=crc32c-intel 00:10:46.645 [job0] 00:10:46.645 filename=/dev/nvme0n1 00:10:46.645 Could not set queue depth (nvme0n1) 00:10:46.903 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.903 fio-3.35 00:10:46.903 Starting 1 thread 00:10:48.277 00:10:48.277 job0: (groupid=0, jobs=1): err= 0: pid=77802: Tue Oct 1 06:03:13 2024 00:10:48.277 read: IOPS=2823, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:10:48.277 slat (nsec): min=12397, max=51629, avg=15374.71, stdev=4395.48 00:10:48.277 clat (usec): min=134, max=308, avg=184.21, stdev=23.85 00:10:48.277 lat (usec): min=147, max=324, avg=199.58, stdev=24.40 00:10:48.277 clat percentiles (usec): 00:10:48.277 | 1.00th=[ 145], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:10:48.277 | 30.00th=[ 169], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:10:48.277 | 70.00th=[ 194], 80.00th=[ 202], 90.00th=[ 217], 95.00th=[ 229], 00:10:48.277 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 289], 00:10:48.277 | 99.99th=[ 310] 00:10:48.277 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:48.277 slat (usec): min=14, max=122, avg=22.39, stdev= 6.47 00:10:48.277 clat (usec): min=84, max=320, avg=116.42, stdev=18.81 00:10:48.277 lat (usec): min=103, max=370, avg=138.81, stdev=20.49 00:10:48.277 clat percentiles (usec): 00:10:48.277 | 1.00th=[ 89], 5.00th=[ 93], 10.00th=[ 96], 20.00th=[ 101], 00:10:48.277 | 30.00th=[ 105], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 118], 00:10:48.277 | 70.00th=[ 125], 80.00th=[ 133], 90.00th=[ 141], 95.00th=[ 151], 00:10:48.277 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 212], 99.95th=[ 249], 00:10:48.277 | 99.99th=[ 322] 00:10:48.277 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:48.277 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:48.277 lat (usec) : 100=9.51%, 250=89.78%, 500=0.71% 00:10:48.277 cpu : usr=2.10%, sys=8.80%, ctx=5898, majf=0, minf=5 00:10:48.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.277 issued rwts: total=2826,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.277 00:10:48.277 Run status group 0 (all jobs): 00:10:48.277 READ: bw=11.0MiB/s (11.6MB/s), 11.0MiB/s-11.0MiB/s (11.6MB/s-11.6MB/s), io=11.0MiB (11.6MB), run=1001-1001msec 00:10:48.277 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:48.277 00:10:48.277 Disk stats (read/write): 00:10:48.277 nvme0n1: ios=2610/2730, merge=0/0, ticks=520/368, in_queue=888, 
util=91.38% 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:48.277 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # nvmfcleanup 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:48.277 rmmod nvme_tcp 00:10:48.277 rmmod nvme_fabrics 00:10:48.277 rmmod nvme_keyring 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@513 -- # '[' -n 77718 ']' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@514 -- # killprocess 77718 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 77718 ']' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 77718 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77718 00:10:48.277 killing process with pid 77718 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77718' 00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 77718 
00:10:48.277 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 77718 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-save 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@787 -- # iptables-restore 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:48.535 06:03:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:48.535 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:10:48.794 00:10:48.794 real 0m5.369s 00:10:48.794 user 0m15.821s 00:10:48.794 sys 0m2.330s 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:48.794 ************************************ 00:10:48.794 
END TEST nvmf_nmic 00:10:48.794 ************************************ 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.794 ************************************ 00:10:48.794 START TEST nvmf_fio_target 00:10:48.794 ************************************ 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:48.794 * Looking for test storage... 00:10:48.794 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lcov --version 00:10:48.794 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:49.054 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.055 --rc genhtml_branch_coverage=1 00:10:49.055 --rc genhtml_function_coverage=1 00:10:49.055 --rc genhtml_legend=1 00:10:49.055 --rc geninfo_all_blocks=1 00:10:49.055 --rc geninfo_unexecuted_blocks=1 00:10:49.055 00:10:49.055 ' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.055 --rc genhtml_branch_coverage=1 00:10:49.055 --rc genhtml_function_coverage=1 00:10:49.055 --rc genhtml_legend=1 00:10:49.055 --rc geninfo_all_blocks=1 00:10:49.055 --rc geninfo_unexecuted_blocks=1 00:10:49.055 00:10:49.055 ' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.055 --rc genhtml_branch_coverage=1 00:10:49.055 --rc genhtml_function_coverage=1 00:10:49.055 --rc genhtml_legend=1 00:10:49.055 --rc geninfo_all_blocks=1 00:10:49.055 --rc geninfo_unexecuted_blocks=1 00:10:49.055 00:10:49.055 ' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:49.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:49.055 --rc genhtml_branch_coverage=1 00:10:49.055 --rc genhtml_function_coverage=1 00:10:49.055 --rc genhtml_legend=1 00:10:49.055 --rc geninfo_all_blocks=1 00:10:49.055 --rc geninfo_unexecuted_blocks=1 00:10:49.055 00:10:49.055 ' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:49.055 
06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.055 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.055 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:49.056 06:03:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:49.056 Cannot find device "nvmf_init_br" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:49.056 Cannot find device "nvmf_init_br2" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:49.056 Cannot find device "nvmf_tgt_br" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:49.056 Cannot find device "nvmf_tgt_br2" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:49.056 Cannot find device "nvmf_init_br" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:49.056 Cannot find device "nvmf_init_br2" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:49.056 Cannot find device "nvmf_tgt_br" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:49.056 Cannot find device "nvmf_tgt_br2" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:49.056 Cannot find device "nvmf_br" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:49.056 Cannot find device "nvmf_init_if" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:49.056 Cannot find device "nvmf_init_if2" 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:49.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:10:49.056 
06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:49.056 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:49.056 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:49.329 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:49.330 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:49.330 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:10:49.330 00:10:49.330 --- 10.0.0.3 ping statistics --- 00:10:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.330 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:49.330 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:49.330 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:10:49.330 00:10:49.330 --- 10.0.0.4 ping statistics --- 00:10:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.330 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:49.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:49.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:10:49.330 00:10:49.330 --- 10.0.0.1 ping statistics --- 00:10:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.330 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:49.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:49.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.034 ms 00:10:49.330 00:10:49.330 --- 10.0.0.2 ping statistics --- 00:10:49.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.330 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@457 -- # return 0 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@505 -- # nvmfpid=78039 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@506 -- # waitforlisten 78039 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 78039 ']' 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:49.330 06:03:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.589 [2024-10-01 06:03:14.948477] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:10:49.589 [2024-10-01 06:03:14.949087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.589 [2024-10-01 06:03:15.087126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:49.589 [2024-10-01 06:03:15.127894] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:49.589 [2024-10-01 06:03:15.127981] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:49.589 [2024-10-01 06:03:15.127992] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:49.589 [2024-10-01 06:03:15.128000] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:49.589 [2024-10-01 06:03:15.128007] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:49.589 [2024-10-01 06:03:15.128187] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:49.589 [2024-10-01 06:03:15.129219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:10:49.589 [2024-10-01 06:03:15.129345] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:10:49.589 [2024-10-01 06:03:15.129352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:49.589 [2024-10-01 06:03:15.161532] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:49.847 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:50.105 [2024-10-01 06:03:15.562708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.105 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.363 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:50.363 06:03:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.621 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:50.621 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:50.879 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:50.879 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.137 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:51.137 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:51.395 06:03:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:51.962 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:51.962 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.221 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:52.221 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.479 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:52.479 06:03:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:52.737 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:52.995 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:52.995 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:53.253 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:53.253 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:53.511 06:03:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:53.768 [2024-10-01 06:03:19.211122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:53.768 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:54.026 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:54.285 06:03:19 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:54.285 06:03:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:56.813 06:03:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:56.813 [global] 00:10:56.813 thread=1 00:10:56.813 invalidate=1 00:10:56.813 rw=write 00:10:56.813 time_based=1 00:10:56.813 runtime=1 00:10:56.813 ioengine=libaio 00:10:56.813 direct=1 00:10:56.813 bs=4096 00:10:56.813 iodepth=1 00:10:56.813 norandommap=0 00:10:56.813 numjobs=1 00:10:56.813 00:10:56.813 verify_dump=1 00:10:56.813 verify_backlog=512 00:10:56.813 verify_state_save=0 00:10:56.813 do_verify=1 00:10:56.813 verify=crc32c-intel 00:10:56.813 [job0] 00:10:56.813 filename=/dev/nvme0n1 00:10:56.813 [job1] 00:10:56.813 filename=/dev/nvme0n2 00:10:56.813 [job2] 00:10:56.813 filename=/dev/nvme0n3 00:10:56.813 [job3] 00:10:56.813 filename=/dev/nvme0n4 00:10:56.813 Could not set queue depth (nvme0n1) 00:10:56.813 Could not set queue depth (nvme0n2) 00:10:56.813 Could not set queue depth (nvme0n3) 00:10:56.813 Could not set queue depth (nvme0n4) 00:10:56.813 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.813 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.813 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.813 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:56.813 fio-3.35 00:10:56.813 Starting 4 threads 00:10:57.747 00:10:57.747 job0: (groupid=0, jobs=1): err= 0: pid=78222: Tue Oct 1 06:03:23 2024 00:10:57.747 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:10:57.747 slat (nsec): min=12902, max=62491, avg=18354.46, stdev=5537.55 00:10:57.747 clat (usec): min=187, max=4752, avg=304.58, stdev=179.90 00:10:57.747 lat (usec): min=210, max=4775, avg=322.93, stdev=181.07 00:10:57.747 clat percentiles (usec): 00:10:57.747 | 1.00th=[ 233], 5.00th=[ 249], 10.00th=[ 258], 20.00th=[ 265], 00:10:57.747 | 30.00th=[ 269], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 289], 00:10:57.747 | 70.00th=[ 293], 80.00th=[ 306], 90.00th=[ 355], 95.00th=[ 461], 00:10:57.747 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 4146], 99.95th=[ 4752], 00:10:57.747 | 99.99th=[ 
4752] 00:10:57.747 write: IOPS=2039, BW=8160KiB/s (8356kB/s)(8168KiB/1001msec); 0 zone resets 00:10:57.747 slat (usec): min=18, max=127, avg=26.26, stdev= 7.40 00:10:57.747 clat (usec): min=94, max=3230, avg=216.81, stdev=91.20 00:10:57.747 lat (usec): min=114, max=3250, avg=243.06, stdev=92.42 00:10:57.747 clat percentiles (usec): 00:10:57.747 | 1.00th=[ 117], 5.00th=[ 141], 10.00th=[ 182], 20.00th=[ 192], 00:10:57.747 | 30.00th=[ 198], 40.00th=[ 204], 50.00th=[ 208], 60.00th=[ 215], 00:10:57.747 | 70.00th=[ 223], 80.00th=[ 235], 90.00th=[ 251], 95.00th=[ 318], 00:10:57.747 | 99.00th=[ 359], 99.50th=[ 408], 99.90th=[ 709], 99.95th=[ 2147], 00:10:57.747 | 99.99th=[ 3228] 00:10:57.747 bw ( KiB/s): min= 8192, max= 8192, per=20.28%, avg=8192.00, stdev= 0.00, samples=1 00:10:57.747 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:57.747 lat (usec) : 100=0.06%, 250=53.27%, 500=45.44%, 750=1.06% 00:10:57.747 lat (msec) : 2=0.03%, 4=0.08%, 10=0.06% 00:10:57.747 cpu : usr=1.50%, sys=6.60%, ctx=3578, majf=0, minf=11 00:10:57.747 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.747 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.747 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.747 issued rwts: total=1536,2042,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.747 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.747 job1: (groupid=0, jobs=1): err= 0: pid=78223: Tue Oct 1 06:03:23 2024 00:10:57.747 read: IOPS=2795, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:10:57.747 slat (nsec): min=12365, max=50095, avg=17167.88, stdev=5439.10 00:10:57.747 clat (usec): min=136, max=776, avg=172.19, stdev=19.37 00:10:57.748 lat (usec): min=150, max=790, avg=189.36, stdev=20.21 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:10:57.748 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:10:57.748 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:10:57.748 | 99.00th=[ 212], 99.50th=[ 219], 99.90th=[ 334], 99.95th=[ 515], 00:10:57.748 | 99.99th=[ 775] 00:10:57.748 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:57.748 slat (usec): min=17, max=119, avg=23.01, stdev= 6.14 00:10:57.748 clat (usec): min=96, max=247, avg=126.34, stdev=12.17 00:10:57.748 lat (usec): min=117, max=366, avg=149.35, stdev=13.82 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 103], 5.00th=[ 110], 10.00th=[ 113], 20.00th=[ 117], 00:10:57.748 | 30.00th=[ 120], 40.00th=[ 123], 50.00th=[ 125], 60.00th=[ 128], 00:10:57.748 | 70.00th=[ 133], 80.00th=[ 137], 90.00th=[ 143], 95.00th=[ 149], 00:10:57.748 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 186], 00:10:57.748 | 99.99th=[ 247] 00:10:57.748 bw ( KiB/s): min=12288, max=12288, per=30.43%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.748 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:57.748 lat (usec) : 100=0.15%, 250=99.74%, 500=0.07%, 750=0.02%, 1000=0.02% 00:10:57.748 cpu : usr=2.50%, sys=9.40%, ctx=5871, majf=0, minf=9 00:10:57.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 issued rwts: total=2798,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.748 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:10:57.748 job2: (groupid=0, jobs=1): err= 0: pid=78224: Tue Oct 1 06:03:23 2024 00:10:57.748 read: IOPS=1796, BW=7185KiB/s (7357kB/s)(7192KiB/1001msec) 00:10:57.748 slat (nsec): min=12948, max=49813, avg=16379.71, stdev=3470.78 00:10:57.748 clat (usec): min=154, max=697, avg=282.34, stdev=44.59 00:10:57.748 lat (usec): min=168, max=712, avg=298.72, stdev=44.94 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 174], 5.00th=[ 237], 10.00th=[ 249], 20.00th=[ 260], 00:10:57.748 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:10:57.748 | 70.00th=[ 289], 80.00th=[ 302], 90.00th=[ 322], 95.00th=[ 359], 00:10:57.748 | 99.00th=[ 461], 99.50th=[ 506], 99.90th=[ 668], 99.95th=[ 701], 00:10:57.748 | 99.99th=[ 701] 00:10:57.748 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:57.748 slat (usec): min=17, max=109, avg=24.08, stdev= 6.05 00:10:57.748 clat (usec): min=106, max=2028, avg=198.49, stdev=64.56 00:10:57.748 lat (usec): min=126, max=2073, avg=222.57, stdev=65.60 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 114], 5.00th=[ 127], 10.00th=[ 139], 20.00th=[ 180], 00:10:57.748 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:57.748 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 243], 00:10:57.748 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 1012], 99.95th=[ 1631], 00:10:57.748 | 99.99th=[ 2024] 00:10:57.748 bw ( KiB/s): min= 8192, max= 8192, per=20.28%, avg=8192.00, stdev= 0.00, samples=1 00:10:57.748 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:57.748 lat (usec) : 250=56.63%, 500=42.98%, 750=0.31% 00:10:57.748 lat (msec) : 2=0.05%, 4=0.03% 00:10:57.748 cpu : usr=1.50%, sys=6.30%, ctx=3846, majf=0, minf=11 00:10:57.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 issued rwts: total=1798,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.748 job3: (groupid=0, jobs=1): err= 0: pid=78225: Tue Oct 1 06:03:23 2024 00:10:57.748 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:10:57.748 slat (nsec): min=12557, max=53757, avg=17024.99, stdev=5013.37 00:10:57.748 clat (usec): min=149, max=249, avg=183.25, stdev=14.45 00:10:57.748 lat (usec): min=163, max=265, avg=200.27, stdev=15.82 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:10:57.748 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:10:57.748 | 70.00th=[ 190], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:10:57.748 | 99.00th=[ 223], 99.50th=[ 229], 99.90th=[ 243], 99.95th=[ 245], 00:10:57.748 | 99.99th=[ 251] 00:10:57.748 write: IOPS=2942, BW=11.5MiB/s (12.1MB/s)(11.5MiB/1001msec); 0 zone resets 00:10:57.748 slat (usec): min=15, max=137, avg=24.49, stdev= 7.50 00:10:57.748 clat (usec): min=98, max=1642, avg=137.15, stdev=31.93 00:10:57.748 lat (usec): min=118, max=1660, avg=161.64, stdev=32.89 00:10:57.748 clat percentiles (usec): 00:10:57.748 | 1.00th=[ 110], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:10:57.748 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:10:57.748 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 155], 95.00th=[ 161], 00:10:57.748 | 
99.00th=[ 176], 99.50th=[ 182], 99.90th=[ 235], 99.95th=[ 529], 00:10:57.748 | 99.99th=[ 1647] 00:10:57.748 bw ( KiB/s): min=12288, max=12288, per=30.43%, avg=12288.00, stdev= 0.00, samples=1 00:10:57.748 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:57.748 lat (usec) : 100=0.05%, 250=99.91%, 750=0.02% 00:10:57.748 lat (msec) : 2=0.02% 00:10:57.748 cpu : usr=2.40%, sys=9.20%, ctx=5519, majf=0, minf=5 00:10:57.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:57.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:57.748 issued rwts: total=2560,2945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:57.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:57.748 00:10:57.748 Run status group 0 (all jobs): 00:10:57.748 READ: bw=33.9MiB/s (35.6MB/s), 6138KiB/s-10.9MiB/s (6285kB/s-11.4MB/s), io=34.0MiB (35.6MB), run=1001-1001msec 00:10:57.748 WRITE: bw=39.4MiB/s (41.4MB/s), 8160KiB/s-12.0MiB/s (8356kB/s-12.6MB/s), io=39.5MiB (41.4MB), run=1001-1001msec 00:10:57.748 00:10:57.748 Disk stats (read/write): 00:10:57.748 nvme0n1: ios=1581/1536, merge=0/0, ticks=502/331, in_queue=833, util=87.68% 00:10:57.748 nvme0n2: ios=2509/2560, merge=0/0, ticks=462/344, in_queue=806, util=88.66% 00:10:57.748 nvme0n3: ios=1536/1774, merge=0/0, ticks=433/372, in_queue=805, util=89.23% 00:10:57.748 nvme0n4: ios=2168/2560, merge=0/0, ticks=410/370, in_queue=780, util=89.80% 00:10:57.748 06:03:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:57.748 [global] 00:10:57.748 thread=1 00:10:57.748 invalidate=1 00:10:57.748 rw=randwrite 00:10:57.748 time_based=1 00:10:57.748 runtime=1 00:10:57.748 ioengine=libaio 00:10:57.748 direct=1 00:10:57.748 bs=4096 00:10:57.748 iodepth=1 00:10:57.748 norandommap=0 00:10:57.748 numjobs=1 00:10:57.748 00:10:57.748 verify_dump=1 00:10:57.748 verify_backlog=512 00:10:57.748 verify_state_save=0 00:10:57.748 do_verify=1 00:10:57.748 verify=crc32c-intel 00:10:57.748 [job0] 00:10:57.748 filename=/dev/nvme0n1 00:10:57.748 [job1] 00:10:57.748 filename=/dev/nvme0n2 00:10:57.748 [job2] 00:10:57.748 filename=/dev/nvme0n3 00:10:57.748 [job3] 00:10:57.748 filename=/dev/nvme0n4 00:10:57.748 Could not set queue depth (nvme0n1) 00:10:57.748 Could not set queue depth (nvme0n2) 00:10:57.748 Could not set queue depth (nvme0n3) 00:10:57.748 Could not set queue depth (nvme0n4) 00:10:58.007 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.007 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.007 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.007 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.007 fio-3.35 00:10:58.007 Starting 4 threads 00:10:59.378 00:10:59.378 job0: (groupid=0, jobs=1): err= 0: pid=78278: Tue Oct 1 06:03:24 2024 00:10:59.378 read: IOPS=2904, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1001msec) 00:10:59.378 slat (nsec): min=11965, max=38704, avg=14754.88, stdev=2520.07 00:10:59.378 clat (usec): min=135, max=2670, avg=170.09, stdev=49.34 00:10:59.378 lat (usec): min=149, max=2688, avg=184.85, stdev=49.46 00:10:59.378 clat percentiles (usec): 
00:10:59.378 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:59.378 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 172], 00:10:59.378 | 70.00th=[ 176], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 192], 00:10:59.378 | 99.00th=[ 204], 99.50th=[ 210], 99.90th=[ 225], 99.95th=[ 775], 00:10:59.378 | 99.99th=[ 2671] 00:10:59.378 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:59.378 slat (usec): min=15, max=109, avg=20.95, stdev= 4.04 00:10:59.378 clat (usec): min=97, max=257, avg=126.20, stdev=11.70 00:10:59.378 lat (usec): min=116, max=366, avg=147.15, stdev=12.46 00:10:59.378 clat percentiles (usec): 00:10:59.378 | 1.00th=[ 105], 5.00th=[ 111], 10.00th=[ 114], 20.00th=[ 118], 00:10:59.378 | 30.00th=[ 121], 40.00th=[ 123], 50.00th=[ 126], 60.00th=[ 128], 00:10:59.378 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 141], 95.00th=[ 147], 00:10:59.378 | 99.00th=[ 159], 99.50th=[ 167], 99.90th=[ 208], 99.95th=[ 245], 00:10:59.378 | 99.99th=[ 258] 00:10:59.378 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:59.378 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:59.378 lat (usec) : 100=0.07%, 250=99.88%, 500=0.02%, 1000=0.02% 00:10:59.378 lat (msec) : 4=0.02% 00:10:59.378 cpu : usr=1.90%, sys=9.00%, ctx=5980, majf=0, minf=7 00:10:59.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.378 issued rwts: total=2907,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.378 job1: (groupid=0, jobs=1): err= 0: pid=78279: Tue Oct 1 06:03:24 2024 00:10:59.378 read: IOPS=1977, BW=7908KiB/s (8098kB/s)(7916KiB/1001msec) 00:10:59.378 slat (nsec): min=8736, max=41675, avg=15208.34, stdev=2501.56 00:10:59.378 clat (usec): min=219, max=502, avg=255.63, stdev=14.92 00:10:59.378 lat (usec): min=233, max=526, avg=270.84, stdev=15.20 00:10:59.378 clat percentiles (usec): 00:10:59.378 | 1.00th=[ 227], 5.00th=[ 235], 10.00th=[ 239], 20.00th=[ 243], 00:10:59.378 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 260], 00:10:59.378 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 273], 95.00th=[ 281], 00:10:59.378 | 99.00th=[ 293], 99.50th=[ 302], 99.90th=[ 318], 99.95th=[ 502], 00:10:59.378 | 99.99th=[ 502] 00:10:59.378 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.378 slat (usec): min=15, max=108, avg=22.23, stdev= 5.35 00:10:59.378 clat (usec): min=163, max=308, avg=200.88, stdev=17.18 00:10:59.378 lat (usec): min=182, max=345, avg=223.11, stdev=19.09 00:10:59.378 clat percentiles (usec): 00:10:59.378 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 186], 00:10:59.378 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:10:59.378 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:10:59.378 | 99.00th=[ 247], 99.50th=[ 265], 99.90th=[ 281], 99.95th=[ 285], 00:10:59.378 | 99.99th=[ 310] 00:10:59.378 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.378 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.378 lat (usec) : 250=68.21%, 500=31.76%, 750=0.02% 00:10:59.378 cpu : usr=1.20%, sys=7.00%, ctx=4027, majf=0, minf=17 00:10:59.378 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:10:59.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.378 issued rwts: total=1979,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.378 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.378 job2: (groupid=0, jobs=1): err= 0: pid=78280: Tue Oct 1 06:03:24 2024 00:10:59.378 read: IOPS=2725, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:10:59.378 slat (nsec): min=12179, max=39196, avg=13997.90, stdev=1909.10 00:10:59.378 clat (usec): min=146, max=526, avg=174.62, stdev=14.22 00:10:59.378 lat (usec): min=158, max=539, avg=188.62, stdev=14.51 00:10:59.378 clat percentiles (usec): 00:10:59.378 | 1.00th=[ 151], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 163], 00:10:59.378 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:10:59.378 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:10:59.378 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 225], 99.95th=[ 227], 00:10:59.379 | 99.99th=[ 529] 00:10:59.379 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:59.379 slat (usec): min=14, max=105, avg=20.67, stdev= 4.03 00:10:59.379 clat (usec): min=100, max=294, avg=133.94, stdev=12.57 00:10:59.379 lat (usec): min=123, max=355, avg=154.61, stdev=13.60 00:10:59.379 clat percentiles (usec): 00:10:59.379 | 1.00th=[ 111], 5.00th=[ 115], 10.00th=[ 119], 20.00th=[ 124], 00:10:59.379 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:10:59.379 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:10:59.379 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 182], 99.95th=[ 249], 00:10:59.379 | 99.99th=[ 297] 00:10:59.379 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:59.379 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:59.379 lat (usec) : 250=99.97%, 500=0.02%, 750=0.02% 00:10:59.379 cpu : usr=2.90%, sys=7.60%, ctx=5803, majf=0, minf=9 00:10:59.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.379 issued rwts: total=2728,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.379 job3: (groupid=0, jobs=1): err= 0: pid=78281: Tue Oct 1 06:03:24 2024 00:10:59.379 read: IOPS=1977, BW=7908KiB/s (8098kB/s)(7916KiB/1001msec) 00:10:59.379 slat (nsec): min=9130, max=28489, avg=10867.14, stdev=1939.39 00:10:59.379 clat (usec): min=186, max=735, avg=260.30, stdev=17.94 00:10:59.379 lat (usec): min=209, max=746, avg=271.17, stdev=17.96 00:10:59.379 clat percentiles (usec): 00:10:59.379 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 249], 00:10:59.379 | 30.00th=[ 253], 40.00th=[ 255], 50.00th=[ 260], 60.00th=[ 265], 00:10:59.379 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 285], 00:10:59.379 | 99.00th=[ 297], 99.50th=[ 306], 99.90th=[ 326], 99.95th=[ 734], 00:10:59.379 | 99.99th=[ 734] 00:10:59.379 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.379 slat (nsec): min=11406, max=77360, avg=17243.72, stdev=6014.52 00:10:59.379 clat (usec): min=115, max=356, avg=206.24, stdev=18.01 00:10:59.379 lat (usec): min=182, max=396, avg=223.48, stdev=19.46 00:10:59.379 clat percentiles (usec): 
00:10:59.379 | 1.00th=[ 174], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:10:59.379 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 210], 00:10:59.379 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:10:59.379 | 99.00th=[ 255], 99.50th=[ 269], 99.90th=[ 314], 99.95th=[ 314], 00:10:59.379 | 99.99th=[ 359] 00:10:59.379 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.379 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.379 lat (usec) : 250=62.06%, 500=37.92%, 750=0.02% 00:10:59.379 cpu : usr=1.50%, sys=4.70%, ctx=4027, majf=0, minf=13 00:10:59.379 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.379 issued rwts: total=1979,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.379 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.379 00:10:59.379 Run status group 0 (all jobs): 00:10:59.379 READ: bw=37.4MiB/s (39.3MB/s), 7908KiB/s-11.3MiB/s (8098kB/s-11.9MB/s), io=37.5MiB (39.3MB), run=1001-1001msec 00:10:59.379 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:59.379 00:10:59.379 Disk stats (read/write): 00:10:59.379 nvme0n1: ios=2610/2572, merge=0/0, ticks=475/348, in_queue=823, util=88.78% 00:10:59.379 nvme0n2: ios=1585/1981, merge=0/0, ticks=426/411, in_queue=837, util=88.50% 00:10:59.379 nvme0n3: ios=2448/2560, merge=0/0, ticks=475/367, in_queue=842, util=89.38% 00:10:59.379 nvme0n4: ios=1536/1981, merge=0/0, ticks=365/350, in_queue=715, util=89.81% 00:10:59.379 06:03:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:59.379 [global] 00:10:59.379 thread=1 00:10:59.379 invalidate=1 00:10:59.379 rw=write 00:10:59.379 time_based=1 00:10:59.379 runtime=1 00:10:59.379 ioengine=libaio 00:10:59.379 direct=1 00:10:59.379 bs=4096 00:10:59.379 iodepth=128 00:10:59.379 norandommap=0 00:10:59.379 numjobs=1 00:10:59.379 00:10:59.379 verify_dump=1 00:10:59.379 verify_backlog=512 00:10:59.379 verify_state_save=0 00:10:59.379 do_verify=1 00:10:59.379 verify=crc32c-intel 00:10:59.379 [job0] 00:10:59.379 filename=/dev/nvme0n1 00:10:59.379 [job1] 00:10:59.379 filename=/dev/nvme0n2 00:10:59.379 [job2] 00:10:59.379 filename=/dev/nvme0n3 00:10:59.379 [job3] 00:10:59.379 filename=/dev/nvme0n4 00:10:59.379 Could not set queue depth (nvme0n1) 00:10:59.379 Could not set queue depth (nvme0n2) 00:10:59.379 Could not set queue depth (nvme0n3) 00:10:59.379 Could not set queue depth (nvme0n4) 00:10:59.379 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.379 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.379 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.379 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:59.379 fio-3.35 00:10:59.379 Starting 4 threads 00:11:00.754 00:11:00.754 job0: (groupid=0, jobs=1): err= 0: pid=78334: Tue Oct 1 06:03:25 2024 00:11:00.754 read: IOPS=4940, BW=19.3MiB/s (20.2MB/s)(19.3MiB/1002msec) 00:11:00.754 slat (usec): min=4, max=3591, 
avg=96.86, stdev=386.57 00:11:00.754 clat (usec): min=1046, max=16511, avg=12812.70, stdev=1274.39 00:11:00.754 lat (usec): min=1060, max=16534, avg=12909.56, stdev=1308.37 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[ 7046], 5.00th=[11338], 10.00th=[11994], 20.00th=[12518], 00:11:00.754 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[12911], 00:11:00.754 | 70.00th=[13042], 80.00th=[13304], 90.00th=[14222], 95.00th=[14615], 00:11:00.754 | 99.00th=[15139], 99.50th=[15139], 99.90th=[15926], 99.95th=[15926], 00:11:00.754 | 99.99th=[16450] 00:11:00.754 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:00.754 slat (usec): min=10, max=6021, avg=94.01, stdev=453.47 00:11:00.754 clat (usec): min=9918, max=17530, avg=12344.57, stdev=898.08 00:11:00.754 lat (usec): min=9949, max=17563, avg=12438.58, stdev=993.80 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[10290], 5.00th=[11469], 10.00th=[11731], 20.00th=[11863], 00:11:00.754 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12256], 60.00th=[12256], 00:11:00.754 | 70.00th=[12387], 80.00th=[12518], 90.00th=[13042], 95.00th=[14615], 00:11:00.754 | 99.00th=[15401], 99.50th=[15533], 99.90th=[17171], 99.95th=[17433], 00:11:00.754 | 99.99th=[17433] 00:11:00.754 bw ( KiB/s): min=20480, max=20480, per=26.42%, avg=20480.00, stdev= 0.00, samples=1 00:11:00.754 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:11:00.754 lat (msec) : 2=0.06%, 4=0.20%, 10=0.76%, 20=98.98% 00:11:00.754 cpu : usr=5.39%, sys=12.99%, ctx=358, majf=0, minf=11 00:11:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.754 issued rwts: total=4950,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.754 job1: (groupid=0, jobs=1): err= 0: pid=78335: Tue Oct 1 06:03:25 2024 00:11:00.754 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:11:00.754 slat (usec): min=5, max=5339, avg=97.61, stdev=472.06 00:11:00.754 clat (usec): min=514, max=18018, avg=12755.78, stdev=1234.97 00:11:00.754 lat (usec): min=5314, max=18052, avg=12853.40, stdev=1251.97 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[ 6456], 5.00th=[11207], 10.00th=[11731], 20.00th=[12387], 00:11:00.754 | 30.00th=[12518], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:11:00.754 | 70.00th=[13042], 80.00th=[13304], 90.00th=[13698], 95.00th=[14353], 00:11:00.754 | 99.00th=[16057], 99.50th=[16319], 99.90th=[17433], 99.95th=[17695], 00:11:00.754 | 99.99th=[17957] 00:11:00.754 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:11:00.754 slat (usec): min=8, max=5209, avg=89.71, stdev=521.23 00:11:00.754 clat (usec): min=6752, max=17264, avg=11986.77, stdev=1038.98 00:11:00.754 lat (usec): min=6809, max=17284, avg=12076.48, stdev=1148.03 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[ 8291], 5.00th=[10552], 10.00th=[11076], 20.00th=[11600], 00:11:00.754 | 30.00th=[11731], 40.00th=[11863], 50.00th=[11994], 60.00th=[12125], 00:11:00.754 | 70.00th=[12256], 80.00th=[12387], 90.00th=[12649], 95.00th=[13042], 00:11:00.754 | 99.00th=[16188], 99.50th=[16581], 99.90th=[17171], 99.95th=[17171], 00:11:00.754 | 99.99th=[17171] 00:11:00.754 bw ( KiB/s): min=20480, max=20521, per=26.45%, avg=20500.50, stdev=28.99, 
samples=2 00:11:00.754 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:00.754 lat (usec) : 750=0.01% 00:11:00.754 lat (msec) : 10=2.66%, 20=97.33% 00:11:00.754 cpu : usr=4.40%, sys=14.59%, ctx=296, majf=0, minf=11 00:11:00.754 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:00.754 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.754 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.754 issued rwts: total=5115,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.754 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.754 job2: (groupid=0, jobs=1): err= 0: pid=78336: Tue Oct 1 06:03:25 2024 00:11:00.754 read: IOPS=4160, BW=16.3MiB/s (17.0MB/s)(16.3MiB/1002msec) 00:11:00.754 slat (usec): min=4, max=4431, avg=112.00, stdev=447.43 00:11:00.754 clat (usec): min=471, max=18943, avg=14600.80, stdev=1627.44 00:11:00.754 lat (usec): min=3108, max=19010, avg=14712.80, stdev=1662.76 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[ 8160], 5.00th=[12256], 10.00th=[13304], 20.00th=[14222], 00:11:00.754 | 30.00th=[14615], 40.00th=[14615], 50.00th=[14746], 60.00th=[14746], 00:11:00.754 | 70.00th=[14877], 80.00th=[15008], 90.00th=[16450], 95.00th=[16909], 00:11:00.754 | 99.00th=[17957], 99.50th=[17957], 99.90th=[18744], 99.95th=[18744], 00:11:00.754 | 99.99th=[19006] 00:11:00.754 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:11:00.754 slat (usec): min=10, max=4207, avg=107.79, stdev=482.85 00:11:00.754 clat (usec): min=8573, max=18882, avg=14258.79, stdev=1145.46 00:11:00.754 lat (usec): min=8591, max=18909, avg=14366.58, stdev=1225.20 00:11:00.754 clat percentiles (usec): 00:11:00.754 | 1.00th=[11731], 5.00th=[12780], 10.00th=[13173], 20.00th=[13566], 00:11:00.754 | 30.00th=[13698], 40.00th=[13960], 50.00th=[14091], 60.00th=[14222], 00:11:00.755 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15401], 95.00th=[16909], 00:11:00.755 | 99.00th=[18220], 99.50th=[18482], 99.90th=[18744], 99.95th=[18744], 00:11:00.755 | 99.99th=[19006] 00:11:00.755 bw ( KiB/s): min=18140, max=18320, per=23.52%, avg=18230.00, stdev=127.28, samples=2 00:11:00.755 iops : min= 4535, max= 4580, avg=4557.50, stdev=31.82, samples=2 00:11:00.755 lat (usec) : 500=0.01% 00:11:00.755 lat (msec) : 4=0.34%, 10=0.52%, 20=99.12% 00:11:00.755 cpu : usr=3.80%, sys=13.79%, ctx=417, majf=0, minf=15 00:11:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.755 issued rwts: total=4169,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.755 job3: (groupid=0, jobs=1): err= 0: pid=78337: Tue Oct 1 06:03:25 2024 00:11:00.755 read: IOPS=4208, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1004msec) 00:11:00.755 slat (usec): min=5, max=3541, avg=110.74, stdev=527.50 00:11:00.755 clat (usec): min=1404, max=15772, avg=14517.77, stdev=1216.98 00:11:00.755 lat (usec): min=4446, max=15799, avg=14628.52, stdev=1098.65 00:11:00.755 clat percentiles (usec): 00:11:00.755 | 1.00th=[ 8356], 5.00th=[12125], 10.00th=[14353], 20.00th=[14484], 00:11:00.755 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[14746], 00:11:00.755 | 70.00th=[14877], 80.00th=[15008], 90.00th=[15139], 95.00th=[15270], 00:11:00.755 | 99.00th=[15533], 
99.50th=[15664], 99.90th=[15795], 99.95th=[15795], 00:11:00.755 | 99.99th=[15795] 00:11:00.755 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:11:00.755 slat (usec): min=10, max=3637, avg=108.09, stdev=468.84 00:11:00.755 clat (usec): min=10940, max=15792, avg=14180.73, stdev=629.02 00:11:00.755 lat (usec): min=11485, max=15822, avg=14288.82, stdev=419.50 00:11:00.755 clat percentiles (usec): 00:11:00.755 | 1.00th=[11338], 5.00th=[13698], 10.00th=[13829], 20.00th=[13960], 00:11:00.755 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14222], 60.00th=[14222], 00:11:00.755 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14746], 95.00th=[15270], 00:11:00.755 | 99.00th=[15664], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:11:00.755 | 99.99th=[15795] 00:11:00.755 bw ( KiB/s): min=17963, max=18936, per=23.80%, avg=18449.50, stdev=688.01, samples=2 00:11:00.755 iops : min= 4490, max= 4734, avg=4612.00, stdev=172.53, samples=2 00:11:00.755 lat (msec) : 2=0.01%, 10=0.72%, 20=99.26% 00:11:00.755 cpu : usr=3.89%, sys=13.26%, ctx=278, majf=0, minf=15 00:11:00.755 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:00.755 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.755 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:00.755 issued rwts: total=4225,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.755 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:00.755 00:11:00.755 Run status group 0 (all jobs): 00:11:00.755 READ: bw=71.8MiB/s (75.3MB/s), 16.3MiB/s-20.0MiB/s (17.0MB/s-20.9MB/s), io=72.1MiB (75.6MB), run=1001-1004msec 00:11:00.755 WRITE: bw=75.7MiB/s (79.4MB/s), 17.9MiB/s-20.0MiB/s (18.8MB/s-20.9MB/s), io=76.0MiB (79.7MB), run=1001-1004msec 00:11:00.755 00:11:00.755 Disk stats (read/write): 00:11:00.755 nvme0n1: ios=4186/4608, merge=0/0, ticks=16977/15731, in_queue=32708, util=88.38% 00:11:00.755 nvme0n2: ios=4280/4608, merge=0/0, ticks=26116/22723, in_queue=48839, util=89.30% 00:11:00.755 nvme0n3: ios=3584/4007, merge=0/0, ticks=16839/16429, in_queue=33268, util=89.31% 00:11:00.755 nvme0n4: ios=3584/4064, merge=0/0, ticks=11994/12322, in_queue=24316, util=89.78% 00:11:00.755 06:03:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:00.755 [global] 00:11:00.755 thread=1 00:11:00.755 invalidate=1 00:11:00.755 rw=randwrite 00:11:00.755 time_based=1 00:11:00.755 runtime=1 00:11:00.755 ioengine=libaio 00:11:00.755 direct=1 00:11:00.755 bs=4096 00:11:00.755 iodepth=128 00:11:00.755 norandommap=0 00:11:00.755 numjobs=1 00:11:00.755 00:11:00.755 verify_dump=1 00:11:00.755 verify_backlog=512 00:11:00.755 verify_state_save=0 00:11:00.755 do_verify=1 00:11:00.755 verify=crc32c-intel 00:11:00.755 [job0] 00:11:00.755 filename=/dev/nvme0n1 00:11:00.755 [job1] 00:11:00.755 filename=/dev/nvme0n2 00:11:00.755 [job2] 00:11:00.755 filename=/dev/nvme0n3 00:11:00.755 [job3] 00:11:00.755 filename=/dev/nvme0n4 00:11:00.755 Could not set queue depth (nvme0n1) 00:11:00.755 Could not set queue depth (nvme0n2) 00:11:00.755 Could not set queue depth (nvme0n3) 00:11:00.755 Could not set queue depth (nvme0n4) 00:11:00.755 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.755 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.755 job2: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.755 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.755 fio-3.35 00:11:00.755 Starting 4 threads 00:11:02.129 00:11:02.129 job0: (groupid=0, jobs=1): err= 0: pid=78402: Tue Oct 1 06:03:27 2024 00:11:02.129 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:11:02.129 slat (usec): min=7, max=4895, avg=88.79, stdev=397.00 00:11:02.129 clat (usec): min=7317, max=16386, avg=11607.95, stdev=949.19 00:11:02.129 lat (usec): min=7341, max=19765, avg=11696.74, stdev=967.95 00:11:02.129 clat percentiles (usec): 00:11:02.129 | 1.00th=[ 8979], 5.00th=[10159], 10.00th=[10552], 20.00th=[11207], 00:11:02.129 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11731], 00:11:02.129 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[13173], 00:11:02.129 | 99.00th=[14746], 99.50th=[15008], 99.90th=[15795], 99.95th=[16188], 00:11:02.129 | 99.99th=[16450] 00:11:02.129 write: IOPS=5745, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec); 0 zone resets 00:11:02.129 slat (usec): min=10, max=4416, avg=79.39, stdev=436.49 00:11:02.129 clat (usec): min=457, max=15752, avg=10681.64, stdev=1255.78 00:11:02.129 lat (usec): min=4310, max=16173, avg=10761.02, stdev=1315.63 00:11:02.129 clat percentiles (usec): 00:11:02.129 | 1.00th=[ 5407], 5.00th=[ 8717], 10.00th=[ 9765], 20.00th=[10290], 00:11:02.129 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10814], 00:11:02.130 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[13042], 00:11:02.130 | 99.00th=[14484], 99.50th=[14877], 99.90th=[15664], 99.95th=[15795], 00:11:02.130 | 99.99th=[15795] 00:11:02.130 bw ( KiB/s): min=20616, max=24568, per=34.95%, avg=22592.00, stdev=2794.49, samples=2 00:11:02.130 iops : min= 5154, max= 6142, avg=5648.00, stdev=698.62, samples=2 00:11:02.130 lat (usec) : 500=0.01% 00:11:02.130 lat (msec) : 10=9.17%, 20=90.82% 00:11:02.130 cpu : usr=5.28%, sys=15.05%, ctx=397, majf=0, minf=1 00:11:02.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:02.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.130 issued rwts: total=5632,5768,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.130 job1: (groupid=0, jobs=1): err= 0: pid=78403: Tue Oct 1 06:03:27 2024 00:11:02.130 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:11:02.130 slat (usec): min=7, max=12212, avg=198.50, stdev=915.76 00:11:02.130 clat (usec): min=13546, max=37562, avg=24867.38, stdev=3764.56 00:11:02.130 lat (usec): min=13558, max=41487, avg=25065.88, stdev=3803.65 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[13960], 5.00th=[18744], 10.00th=[20317], 20.00th=[22676], 00:11:02.130 | 30.00th=[23987], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:11:02.130 | 70.00th=[25297], 80.00th=[26084], 90.00th=[29754], 95.00th=[32637], 00:11:02.130 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36439], 99.95th=[36963], 00:11:02.130 | 99.99th=[37487] 00:11:02.130 write: IOPS=2616, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1002msec); 0 zone resets 00:11:02.130 slat (usec): min=5, max=9413, avg=179.72, stdev=676.08 00:11:02.130 clat (usec): min=620, max=35553, avg=23575.49, stdev=4515.89 00:11:02.130 lat (usec): min=1846, max=35568, avg=23755.21, 
stdev=4531.75 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[ 2540], 5.00th=[16909], 10.00th=[17957], 20.00th=[21627], 00:11:02.130 | 30.00th=[23200], 40.00th=[23725], 50.00th=[24249], 60.00th=[25035], 00:11:02.130 | 70.00th=[25297], 80.00th=[25560], 90.00th=[26084], 95.00th=[31065], 00:11:02.130 | 99.00th=[33817], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:11:02.130 | 99.99th=[35390] 00:11:02.130 bw ( KiB/s): min= 8840, max=11640, per=15.84%, avg=10240.00, stdev=1979.90, samples=2 00:11:02.130 iops : min= 2210, max= 2910, avg=2560.00, stdev=494.97, samples=2 00:11:02.130 lat (usec) : 750=0.02% 00:11:02.130 lat (msec) : 2=0.12%, 4=0.56%, 10=0.15%, 20=12.39%, 50=86.76% 00:11:02.130 cpu : usr=2.70%, sys=7.39%, ctx=772, majf=0, minf=4 00:11:02.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:02.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.130 issued rwts: total=2560,2622,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.130 job2: (groupid=0, jobs=1): err= 0: pid=78405: Tue Oct 1 06:03:27 2024 00:11:02.130 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:11:02.130 slat (usec): min=4, max=10151, avg=200.72, stdev=807.11 00:11:02.130 clat (usec): min=15442, max=37243, avg=25625.09, stdev=3950.28 00:11:02.130 lat (usec): min=15469, max=37270, avg=25825.81, stdev=3960.79 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[17433], 5.00th=[20055], 10.00th=[21103], 20.00th=[23725], 00:11:02.130 | 30.00th=[24249], 40.00th=[24249], 50.00th=[24773], 60.00th=[25035], 00:11:02.130 | 70.00th=[25560], 80.00th=[28181], 90.00th=[32375], 95.00th=[34341], 00:11:02.130 | 99.00th=[35914], 99.50th=[36439], 99.90th=[36963], 99.95th=[36963], 00:11:02.130 | 99.99th=[37487] 00:11:02.130 write: IOPS=2730, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1006msec); 0 zone resets 00:11:02.130 slat (usec): min=5, max=9360, avg=170.09, stdev=660.49 00:11:02.130 clat (usec): min=1683, max=35965, avg=22514.90, stdev=4726.59 00:11:02.130 lat (usec): min=6008, max=37474, avg=22685.00, stdev=4732.53 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[ 6587], 5.00th=[13960], 10.00th=[15664], 20.00th=[18220], 00:11:02.130 | 30.00th=[21365], 40.00th=[23462], 50.00th=[23987], 60.00th=[24773], 00:11:02.130 | 70.00th=[25560], 80.00th=[25560], 90.00th=[27132], 95.00th=[28443], 00:11:02.130 | 99.00th=[29754], 99.50th=[30278], 99.90th=[34866], 99.95th=[34866], 00:11:02.130 | 99.99th=[35914] 00:11:02.130 bw ( KiB/s): min= 8664, max=12312, per=16.23%, avg=10488.00, stdev=2579.53, samples=2 00:11:02.130 iops : min= 2166, max= 3078, avg=2622.00, stdev=644.88, samples=2 00:11:02.130 lat (msec) : 2=0.02%, 10=1.13%, 20=14.64%, 50=84.21% 00:11:02.130 cpu : usr=2.59%, sys=7.66%, ctx=810, majf=0, minf=5 00:11:02.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:02.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.130 issued rwts: total=2560,2747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.130 job3: (groupid=0, jobs=1): err= 0: pid=78406: Tue Oct 1 06:03:27 2024 00:11:02.130 read: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1002msec) 00:11:02.130 slat (usec): 
min=9, max=3324, avg=97.27, stdev=454.87 00:11:02.130 clat (usec): min=330, max=14038, avg=12825.67, stdev=1107.90 00:11:02.130 lat (usec): min=2870, max=14061, avg=12922.94, stdev=1011.02 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[ 6849], 5.00th=[11207], 10.00th=[12518], 20.00th=[12780], 00:11:02.130 | 30.00th=[12911], 40.00th=[12911], 50.00th=[13042], 60.00th=[13042], 00:11:02.130 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:11:02.130 | 99.00th=[13960], 99.50th=[13960], 99.90th=[13960], 99.95th=[14091], 00:11:02.130 | 99.99th=[14091] 00:11:02.130 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:02.130 slat (usec): min=9, max=3042, avg=94.18, stdev=400.67 00:11:02.130 clat (usec): min=9242, max=13602, avg=12432.54, stdev=536.13 00:11:02.130 lat (usec): min=10126, max=13633, avg=12526.72, stdev=357.52 00:11:02.130 clat percentiles (usec): 00:11:02.130 | 1.00th=[10028], 5.00th=[11731], 10.00th=[11994], 20.00th=[12256], 00:11:02.130 | 30.00th=[12387], 40.00th=[12387], 50.00th=[12518], 60.00th=[12518], 00:11:02.130 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13042], 95.00th=[13173], 00:11:02.130 | 99.00th=[13435], 99.50th=[13566], 99.90th=[13566], 99.95th=[13566], 00:11:02.130 | 99.99th=[13566] 00:11:02.130 bw ( KiB/s): min=20480, max=20521, per=31.71%, avg=20500.50, stdev=28.99, samples=2 00:11:02.130 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:11:02.130 lat (usec) : 500=0.01% 00:11:02.130 lat (msec) : 4=0.32%, 10=1.01%, 20=98.66% 00:11:02.130 cpu : usr=4.80%, sys=14.29%, ctx=314, majf=0, minf=1 00:11:02.130 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:02.130 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.130 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.130 issued rwts: total=4897,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.130 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.130 00:11:02.130 Run status group 0 (all jobs): 00:11:02.130 READ: bw=60.8MiB/s (63.7MB/s), 9.94MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=61.1MiB (64.1MB), run=1002-1006msec 00:11:02.130 WRITE: bw=63.1MiB/s (66.2MB/s), 10.2MiB/s-22.4MiB/s (10.7MB/s-23.5MB/s), io=63.5MiB (66.6MB), run=1002-1006msec 00:11:02.130 00:11:02.130 Disk stats (read/write): 00:11:02.130 nvme0n1: ios=4730/5120, merge=0/0, ticks=26056/23041, in_queue=49097, util=88.68% 00:11:02.130 nvme0n2: ios=2097/2400, merge=0/0, ticks=24897/26275, in_queue=51172, util=87.68% 00:11:02.130 nvme0n3: ios=2048/2508, merge=0/0, ticks=25884/26581, in_queue=52465, util=89.09% 00:11:02.130 nvme0n4: ios=4096/4608, merge=0/0, ticks=11713/12358, in_queue=24071, util=89.75% 00:11:02.130 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:02.130 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=78419 00:11:02.130 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:02.130 06:03:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:02.130 [global] 00:11:02.130 thread=1 00:11:02.130 invalidate=1 00:11:02.130 rw=read 00:11:02.130 time_based=1 00:11:02.130 runtime=10 00:11:02.130 ioengine=libaio 00:11:02.130 direct=1 00:11:02.130 bs=4096 00:11:02.130 iodepth=1 00:11:02.130 norandommap=1 00:11:02.130 numjobs=1 00:11:02.130 00:11:02.130 [job0] 
00:11:02.130 filename=/dev/nvme0n1 00:11:02.130 [job1] 00:11:02.130 filename=/dev/nvme0n2 00:11:02.130 [job2] 00:11:02.130 filename=/dev/nvme0n3 00:11:02.130 [job3] 00:11:02.130 filename=/dev/nvme0n4 00:11:02.130 Could not set queue depth (nvme0n1) 00:11:02.130 Could not set queue depth (nvme0n2) 00:11:02.130 Could not set queue depth (nvme0n3) 00:11:02.130 Could not set queue depth (nvme0n4) 00:11:02.130 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.130 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.130 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.130 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:02.130 fio-3.35 00:11:02.130 Starting 4 threads 00:11:05.445 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:05.445 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=37744640, buflen=4096 00:11:05.445 fio: pid=78462, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.445 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:05.445 fio: pid=78461, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.445 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=71110656, buflen=4096 00:11:05.445 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.445 06:03:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:05.703 fio: pid=78459, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.703 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=44711936, buflen=4096 00:11:05.703 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.703 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:05.961 fio: pid=78460, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:05.961 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=66252800, buflen=4096 00:11:05.961 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:05.961 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:05.961 00:11:05.961 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78459: Tue Oct 1 06:03:31 2024 00:11:05.961 read: IOPS=3134, BW=12.2MiB/s (12.8MB/s)(42.6MiB/3483msec) 00:11:05.961 slat (usec): min=8, max=15797, avg=20.71, stdev=258.16 00:11:05.961 clat (usec): min=132, max=2461, avg=296.58, stdev=79.49 00:11:05.961 lat (usec): min=145, max=16056, avg=317.29, stdev=269.43 00:11:05.961 clat percentiles (usec): 00:11:05.961 | 1.00th=[ 157], 
5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 249], 00:11:05.961 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 306], 00:11:05.961 | 70.00th=[ 330], 80.00th=[ 343], 90.00th=[ 359], 95.00th=[ 392], 00:11:05.961 | 99.00th=[ 502], 99.50th=[ 627], 99.90th=[ 996], 99.95th=[ 1450], 00:11:05.961 | 99.99th=[ 2343] 00:11:05.961 bw ( KiB/s): min=10408, max=13368, per=21.06%, avg=12032.00, stdev=1152.92, samples=6 00:11:05.961 iops : min= 2602, max= 3342, avg=3008.00, stdev=288.23, samples=6 00:11:05.961 lat (usec) : 250=20.86%, 500=78.11%, 750=0.77%, 1000=0.16% 00:11:05.961 lat (msec) : 2=0.05%, 4=0.04% 00:11:05.961 cpu : usr=1.29%, sys=4.45%, ctx=10924, majf=0, minf=1 00:11:05.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.961 issued rwts: total=10917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.961 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78460: Tue Oct 1 06:03:31 2024 00:11:05.961 read: IOPS=4304, BW=16.8MiB/s (17.6MB/s)(63.2MiB/3758msec) 00:11:05.961 slat (usec): min=7, max=11767, avg=17.03, stdev=160.78 00:11:05.961 clat (usec): min=22, max=6075, avg=213.85, stdev=132.83 00:11:05.961 lat (usec): min=140, max=11998, avg=230.87, stdev=209.51 00:11:05.961 clat percentiles (usec): 00:11:05.961 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 157], 00:11:05.961 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 182], 00:11:05.961 | 70.00th=[ 210], 80.00th=[ 318], 90.00th=[ 343], 95.00th=[ 355], 00:11:05.961 | 99.00th=[ 453], 99.50th=[ 537], 99.90th=[ 1827], 99.95th=[ 3097], 00:11:05.961 | 99.99th=[ 5932] 00:11:05.961 bw ( KiB/s): min=10840, max=22184, per=29.78%, avg=17009.43, stdev=4830.09, samples=7 00:11:05.961 iops : min= 2710, max= 5546, avg=4252.29, stdev=1207.53, samples=7 00:11:05.961 lat (usec) : 50=0.01%, 250=78.50%, 500=20.81%, 750=0.41%, 1000=0.13% 00:11:05.961 lat (msec) : 2=0.05%, 4=0.07%, 10=0.01% 00:11:05.961 cpu : usr=1.20%, sys=5.75%, ctx=16194, majf=0, minf=1 00:11:05.961 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.961 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.961 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.961 issued rwts: total=16176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.961 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.961 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78461: Tue Oct 1 06:03:31 2024 00:11:05.961 read: IOPS=5381, BW=21.0MiB/s (22.0MB/s)(67.8MiB/3226msec) 00:11:05.961 slat (usec): min=11, max=11774, avg=15.27, stdev=123.02 00:11:05.961 clat (usec): min=139, max=2459, avg=169.15, stdev=30.66 00:11:05.961 lat (usec): min=152, max=11973, avg=184.42, stdev=127.14 00:11:05.961 clat percentiles (usec): 00:11:05.961 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 157], 00:11:05.961 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 169], 00:11:05.961 | 70.00th=[ 174], 80.00th=[ 180], 90.00th=[ 186], 95.00th=[ 194], 00:11:05.961 | 99.00th=[ 217], 99.50th=[ 231], 99.90th=[ 306], 99.95th=[ 652], 00:11:05.961 | 99.99th=[ 1647] 00:11:05.961 bw ( KiB/s): min=20200, max=22352, per=38.03%, 
avg=21724.00, stdev=795.29, samples=6 00:11:05.961 iops : min= 5050, max= 5588, avg=5431.00, stdev=198.82, samples=6 00:11:05.961 lat (usec) : 250=99.75%, 500=0.18%, 750=0.03%, 1000=0.01% 00:11:05.961 lat (msec) : 2=0.02%, 4=0.01% 00:11:05.961 cpu : usr=1.58%, sys=6.76%, ctx=17364, majf=0, minf=1 00:11:05.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.962 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.962 issued rwts: total=17362,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.962 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=78462: Tue Oct 1 06:03:31 2024 00:11:05.962 read: IOPS=3110, BW=12.1MiB/s (12.7MB/s)(36.0MiB/2963msec) 00:11:05.962 slat (usec): min=12, max=272, avg=18.38, stdev= 5.77 00:11:05.962 clat (usec): min=154, max=2150, avg=301.07, stdev=59.23 00:11:05.962 lat (usec): min=167, max=2168, avg=319.45, stdev=60.34 00:11:05.962 clat percentiles (usec): 00:11:05.962 | 1.00th=[ 192], 5.00th=[ 243], 10.00th=[ 253], 20.00th=[ 265], 00:11:05.962 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 293], 60.00th=[ 314], 00:11:05.962 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 363], 00:11:05.962 | 99.00th=[ 474], 99.50th=[ 519], 99.90th=[ 840], 99.95th=[ 1106], 00:11:05.962 | 99.99th=[ 2147] 00:11:05.962 bw ( KiB/s): min=10920, max=13656, per=22.30%, avg=12737.60, stdev=1154.92, samples=5 00:11:05.962 iops : min= 2730, max= 3414, avg=3184.40, stdev=288.73, samples=5 00:11:05.962 lat (usec) : 250=8.75%, 500=90.54%, 750=0.56%, 1000=0.05% 00:11:05.962 lat (msec) : 2=0.08%, 4=0.01% 00:11:05.962 cpu : usr=1.11%, sys=4.89%, ctx=9219, majf=0, minf=2 00:11:05.962 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:05.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.962 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:05.962 issued rwts: total=9216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:05.962 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:05.962 00:11:05.962 Run status group 0 (all jobs): 00:11:05.962 READ: bw=55.8MiB/s (58.5MB/s), 12.1MiB/s-21.0MiB/s (12.7MB/s-22.0MB/s), io=210MiB (220MB), run=2963-3758msec 00:11:05.962 00:11:05.962 Disk stats (read/write): 00:11:05.962 nvme0n1: ios=10427/0, merge=0/0, ticks=3064/0, in_queue=3064, util=94.91% 00:11:05.962 nvme0n2: ios=15354/0, merge=0/0, ticks=3300/0, in_queue=3300, util=95.40% 00:11:05.962 nvme0n3: ios=16774/0, merge=0/0, ticks=2868/0, in_queue=2868, util=96.15% 00:11:05.962 nvme0n4: ios=8971/0, merge=0/0, ticks=2714/0, in_queue=2714, util=96.76% 00:11:06.220 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.220 06:03:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:06.784 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.784 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:06.784 06:03:32 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.784 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:07.349 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.349 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 78419 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:07.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:07.608 06:03:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:07.608 nvmf hotplug test: fio failed as expected 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:07.608 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@125 -- # for i in {1..20} 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:07.867 rmmod nvme_tcp 00:11:07.867 rmmod nvme_fabrics 00:11:07.867 rmmod nvme_keyring 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@513 -- # '[' -n 78039 ']' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@514 -- # killprocess 78039 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 78039 ']' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 78039 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78039 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:07.867 killing process with pid 78039 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78039' 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 78039 00:11:07.867 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 78039 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-save 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@787 -- # iptables-restore 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 
00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:08.126 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:11:08.385 ************************************ 00:11:08.385 END TEST nvmf_fio_target 00:11:08.385 ************************************ 00:11:08.385 00:11:08.385 real 0m19.556s 00:11:08.385 user 1m13.690s 00:11:08.385 sys 0m10.135s 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:08.385 ************************************ 00:11:08.385 START TEST nvmf_bdevio 00:11:08.385 ************************************ 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:08.385 * Looking for test storage... 
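Before the teardown traced above, the hotplug test connected an initiator, ran the fio jobs whose statistics appear earlier, and then tore the session down. A minimal sketch of that initiator-side flow, using the listener address, subsystem NQN and serial number visible in the trace; it is an illustration, not part of the captured log:

  # connect the kernel initiator to the test subsystem (assumed reachable at 10.0.0.3:4420)
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn="$(nvme gen-hostnqn)"
  # wait until a namespace with the subsystem serial shows up as a block device
  until lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
  # ... run fio against the new /dev/nvme*n* devices; deleting the backing malloc bdevs
  # mid-run is what produces the expected err=95 seen in the job output above ...
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1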
00:11:08.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lcov --version 00:11:08.385 06:03:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:08.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.645 --rc genhtml_branch_coverage=1 00:11:08.645 --rc genhtml_function_coverage=1 00:11:08.645 --rc genhtml_legend=1 00:11:08.645 --rc geninfo_all_blocks=1 00:11:08.645 --rc geninfo_unexecuted_blocks=1 00:11:08.645 00:11:08.645 ' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:08.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.645 --rc genhtml_branch_coverage=1 00:11:08.645 --rc genhtml_function_coverage=1 00:11:08.645 --rc genhtml_legend=1 00:11:08.645 --rc geninfo_all_blocks=1 00:11:08.645 --rc geninfo_unexecuted_blocks=1 00:11:08.645 00:11:08.645 ' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:08.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.645 --rc genhtml_branch_coverage=1 00:11:08.645 --rc genhtml_function_coverage=1 00:11:08.645 --rc genhtml_legend=1 00:11:08.645 --rc geninfo_all_blocks=1 00:11:08.645 --rc geninfo_unexecuted_blocks=1 00:11:08.645 00:11:08.645 ' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:08.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:08.645 --rc genhtml_branch_coverage=1 00:11:08.645 --rc genhtml_function_coverage=1 00:11:08.645 --rc genhtml_legend=1 00:11:08.645 --rc geninfo_all_blocks=1 00:11:08.645 --rc geninfo_unexecuted_blocks=1 00:11:08.645 00:11:08.645 ' 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:08.645 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:08.646 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
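nvmftestinit next builds the veth-based test network before starting the target. A condensed sketch of what the traced ip commands below amount to, with the second initiator/target pair and the iptables ACCEPT rules omitted; interface and namespace names are taken from the trace, addressing follows the 10.0.0.x prefix used by the test:

  ip netns add nvmf_tgt_ns_spdk                                  # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target listen address
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                        # bridge the host-side peers together
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up && ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up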
00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:08.646 Cannot find device "nvmf_init_br" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:08.646 Cannot find device "nvmf_init_br2" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:08.646 Cannot find device "nvmf_tgt_br" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:08.646 Cannot find device "nvmf_tgt_br2" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:08.646 Cannot find device "nvmf_init_br" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:08.646 Cannot find device "nvmf_init_br2" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:08.646 Cannot find device "nvmf_tgt_br" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:08.646 Cannot find device "nvmf_tgt_br2" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:08.646 Cannot find device "nvmf_br" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:08.646 Cannot find device "nvmf_init_if" 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:11:08.646 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:08.647 Cannot find device "nvmf_init_if2" 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:08.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:08.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:08.647 
06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:08.647 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:08.905 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:08.905 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:11:08.905 00:11:08.905 --- 10.0.0.3 ping statistics --- 00:11:08.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.905 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:08.905 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:08.905 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:11:08.905 00:11:08.905 --- 10.0.0.4 ping statistics --- 00:11:08.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.905 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:08.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:11:08.905 00:11:08.905 --- 10.0.0.1 ping statistics --- 00:11:08.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.905 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:08.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:11:08.905 00:11:08.905 --- 10.0.0.2 ping statistics --- 00:11:08.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.905 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@457 -- # return 0 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@505 -- # nvmfpid=78781 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@506 -- # waitforlisten 78781 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 78781 ']' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:08.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.905 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:08.906 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.163 [2024-10-01 06:03:34.545987] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:11:09.163 [2024-10-01 06:03:34.546070] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.163 [2024-10-01 06:03:34.682460] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.163 [2024-10-01 06:03:34.718662] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.163 [2024-10-01 06:03:34.718720] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.163 [2024-10-01 06:03:34.718733] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:09.163 [2024-10-01 06:03:34.718741] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:09.163 [2024-10-01 06:03:34.718748] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.163 [2024-10-01 06:03:34.719498] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.163 [2024-10-01 06:03:34.719643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:11:09.163 [2024-10-01 06:03:34.719709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.163 [2024-10-01 06:03:34.719717] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.163 [2024-10-01 06:03:34.749722] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 [2024-10-01 06:03:34.852134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 Malloc0 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.421 [2024-10-01 06:03:34.898973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # config=() 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # local subsystem config 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:11:09.421 { 00:11:09.421 "params": { 00:11:09.421 "name": "Nvme$subsystem", 00:11:09.421 "trtype": "$TEST_TRANSPORT", 00:11:09.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:09.421 "adrfam": "ipv4", 00:11:09.421 "trsvcid": "$NVMF_PORT", 00:11:09.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:09.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:09.421 "hdgst": ${hdgst:-false}, 00:11:09.421 "ddgst": ${ddgst:-false} 00:11:09.421 }, 00:11:09.421 "method": "bdev_nvme_attach_controller" 00:11:09.421 } 00:11:09.421 EOF 00:11:09.421 )") 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@578 -- # cat 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@580 -- # jq . 
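The rpc_cmd calls above provision the target that bdevio connects to. Issued against a standalone nvmf_tgt through scripts/rpc.py, the equivalent setup is roughly the following sketch (arguments copied from the trace; the rpc.py path is relative to the SPDK repo):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420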
00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@581 -- # IFS=, 00:11:09.421 06:03:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:11:09.421 "params": { 00:11:09.421 "name": "Nvme1", 00:11:09.421 "trtype": "tcp", 00:11:09.421 "traddr": "10.0.0.3", 00:11:09.421 "adrfam": "ipv4", 00:11:09.421 "trsvcid": "4420", 00:11:09.421 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:09.421 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:09.421 "hdgst": false, 00:11:09.421 "ddgst": false 00:11:09.421 }, 00:11:09.421 "method": "bdev_nvme_attach_controller" 00:11:09.421 }' 00:11:09.421 [2024-10-01 06:03:34.972753] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:11:09.421 [2024-10-01 06:03:34.972869] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78808 ] 00:11:09.678 [2024-10-01 06:03:35.119418] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:09.678 [2024-10-01 06:03:35.162066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:11:09.678 [2024-10-01 06:03:35.162210] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:11:09.678 [2024-10-01 06:03:35.162218] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.678 [2024-10-01 06:03:35.204621] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:09.936 I/O targets: 00:11:09.936 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:09.936 00:11:09.936 00:11:09.936 CUnit - A unit testing framework for C - Version 2.1-3 00:11:09.936 http://cunit.sourceforge.net/ 00:11:09.936 00:11:09.936 00:11:09.936 Suite: bdevio tests on: Nvme1n1 00:11:09.936 Test: blockdev write read block ...passed 00:11:09.936 Test: blockdev write zeroes read block ...passed 00:11:09.936 Test: blockdev write zeroes read no split ...passed 00:11:09.936 Test: blockdev write zeroes read split ...passed 00:11:09.936 Test: blockdev write zeroes read split partial ...passed 00:11:09.936 Test: blockdev reset ...[2024-10-01 06:03:35.331686] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:09.936 [2024-10-01 06:03:35.331991] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b20d0 (9): Bad file descriptor 00:11:09.936 passed 00:11:09.936 Test: blockdev write read 8 blocks ...[2024-10-01 06:03:35.349485] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:09.936 passed 00:11:09.936 Test: blockdev write read size > 128k ...passed 00:11:09.936 Test: blockdev write read invalid size ...passed 00:11:09.936 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:09.936 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:09.936 Test: blockdev write read max offset ...passed 00:11:09.936 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:09.936 Test: blockdev writev readv 8 blocks ...passed 00:11:09.936 Test: blockdev writev readv 30 x 1block ...passed 00:11:09.936 Test: blockdev writev readv block ...passed 00:11:09.936 Test: blockdev writev readv size > 128k ...passed 00:11:09.936 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:09.936 Test: blockdev comparev and writev ...[2024-10-01 06:03:35.357940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.358151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.358188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.358203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.358653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.358696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.358721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.358734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.359148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.359176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.359198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.359211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.359686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.359722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.359745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:09.936 [2024-10-01 06:03:35.359757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:11:09.936 passed 00:11:09.936 Test: blockdev nvme passthru rw ...passed 00:11:09.936 Test: blockdev nvme passthru vendor specific ...[2024-10-01 06:03:35.360694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.936 [2024-10-01 06:03:35.360724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:09.936 passed 00:11:09.936 Test: blockdev nvme admin passthru ...[2024-10-01 06:03:35.360930] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.936 [2024-10-01 06:03:35.360963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.361158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.936 [2024-10-01 06:03:35.361185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:09.936 [2024-10-01 06:03:35.361355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:09.936 [2024-10-01 06:03:35.361381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:09.936 passed 00:11:09.936 Test: blockdev copy ...passed 00:11:09.936 00:11:09.936 Run Summary: Type Total Ran Passed Failed Inactive 00:11:09.936 suites 1 1 n/a 0 0 00:11:09.936 tests 23 23 23 0 0 00:11:09.936 asserts 152 152 152 0 n/a 00:11:09.936 00:11:09.936 Elapsed time = 0.142 seconds 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # nvmfcleanup 00:11:09.936 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:10.194 rmmod nvme_tcp 00:11:10.194 rmmod nvme_fabrics 00:11:10.194 rmmod nvme_keyring 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@513 -- # '[' -n 78781 ']' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@514 -- # killprocess 78781 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 78781 ']' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 78781 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78781 00:11:10.194 killing process with pid 78781 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78781' 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 78781 00:11:10.194 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 78781 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-save 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@787 -- # iptables-restore 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:10.451 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:10.452 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:10.452 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:10.452 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:10.452 06:03:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:10.452 06:03:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:11:10.452 00:11:10.452 real 0m2.191s 00:11:10.452 user 0m5.410s 00:11:10.452 sys 0m0.782s 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.452 ************************************ 00:11:10.452 END TEST nvmf_bdevio 00:11:10.452 ************************************ 00:11:10.452 06:03:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:10.709 ************************************ 00:11:10.709 END TEST nvmf_target_core 00:11:10.709 ************************************ 00:11:10.709 00:11:10.709 real 2m26.977s 00:11:10.709 user 6m22.715s 00:11:10.709 sys 0m54.053s 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.709 06:03:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.709 06:03:36 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.709 06:03:36 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.709 06:03:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.709 ************************************ 00:11:10.709 START TEST nvmf_target_extra 00:11:10.709 ************************************ 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.709 * Looking for test storage... 
00:11:10.709 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lcov --version 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.709 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:10.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.710 --rc genhtml_branch_coverage=1 00:11:10.710 --rc genhtml_function_coverage=1 00:11:10.710 --rc genhtml_legend=1 00:11:10.710 --rc geninfo_all_blocks=1 00:11:10.710 --rc geninfo_unexecuted_blocks=1 00:11:10.710 00:11:10.710 ' 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:10.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.710 --rc genhtml_branch_coverage=1 00:11:10.710 --rc genhtml_function_coverage=1 00:11:10.710 --rc genhtml_legend=1 00:11:10.710 --rc geninfo_all_blocks=1 00:11:10.710 --rc geninfo_unexecuted_blocks=1 00:11:10.710 00:11:10.710 ' 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:10.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.710 --rc genhtml_branch_coverage=1 00:11:10.710 --rc genhtml_function_coverage=1 00:11:10.710 --rc genhtml_legend=1 00:11:10.710 --rc geninfo_all_blocks=1 00:11:10.710 --rc geninfo_unexecuted_blocks=1 00:11:10.710 00:11:10.710 ' 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:10.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.710 --rc genhtml_branch_coverage=1 00:11:10.710 --rc genhtml_function_coverage=1 00:11:10.710 --rc genhtml_legend=1 00:11:10.710 --rc geninfo_all_blocks=1 00:11:10.710 --rc geninfo_unexecuted_blocks=1 00:11:10.710 00:11:10.710 ' 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:10.710 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.968 06:03:36 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.968 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:11:10.968 06:03:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.969 ************************************ 00:11:10.969 START TEST nvmf_auth_target 00:11:10.969 ************************************ 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:11:10.969 * Looking for test storage... 
00:11:10.969 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lcov --version 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:11:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.969 --rc genhtml_branch_coverage=1 00:11:10.969 --rc genhtml_function_coverage=1 00:11:10.969 --rc genhtml_legend=1 00:11:10.969 --rc geninfo_all_blocks=1 00:11:10.969 --rc geninfo_unexecuted_blocks=1 00:11:10.969 00:11:10.969 ' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:11:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.969 --rc genhtml_branch_coverage=1 00:11:10.969 --rc genhtml_function_coverage=1 00:11:10.969 --rc genhtml_legend=1 00:11:10.969 --rc geninfo_all_blocks=1 00:11:10.969 --rc geninfo_unexecuted_blocks=1 00:11:10.969 00:11:10.969 ' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:11:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.969 --rc genhtml_branch_coverage=1 00:11:10.969 --rc genhtml_function_coverage=1 00:11:10.969 --rc genhtml_legend=1 00:11:10.969 --rc geninfo_all_blocks=1 00:11:10.969 --rc geninfo_unexecuted_blocks=1 00:11:10.969 00:11:10.969 ' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:11:10.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:10.969 --rc genhtml_branch_coverage=1 00:11:10.969 --rc genhtml_function_coverage=1 00:11:10.969 --rc genhtml_legend=1 00:11:10.969 --rc geninfo_all_blocks=1 00:11:10.969 --rc geninfo_unexecuted_blocks=1 00:11:10.969 00:11:10.969 ' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.969 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:10.970 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:10.970 
06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:10.970 Cannot find device "nvmf_init_br" 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:11:10.970 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:11.227 Cannot find device "nvmf_init_br2" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:11.227 Cannot find device "nvmf_tgt_br" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.227 Cannot find device "nvmf_tgt_br2" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:11.227 Cannot find device "nvmf_init_br" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:11.227 Cannot find device "nvmf_init_br2" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:11.227 Cannot find device "nvmf_tgt_br" 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:11:11.227 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:11.227 Cannot find device "nvmf_tgt_br2" 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:11.228 Cannot find device "nvmf_br" 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:11.228 Cannot find device "nvmf_init_if" 00:11:11.228 06:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:11.228 Cannot find device "nvmf_init_if2" 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.228 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:11.228 06:03:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:11.228 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:11.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:11.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:11:11.486 00:11:11.486 --- 10.0.0.3 ping statistics --- 00:11:11.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.486 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:11.486 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:11.486 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:11:11.486 00:11:11.486 --- 10.0.0.4 ping statistics --- 00:11:11.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.486 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:11.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:11.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:11.486 00:11:11.486 --- 10.0.0.1 ping statistics --- 00:11:11.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.486 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:11.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:11.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:11:11.486 00:11:11.486 --- 10.0.0.2 ping statistics --- 00:11:11.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:11.486 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@457 -- # return 0 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=79087 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 79087 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79087 ']' 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
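Note: the trace up to this point is nvmftestinit for the auth target test: nvmf_veth_init builds a veth/bridge test network with the SPDK target isolated in the nvmf_tgt_ns_spdk namespace, opens TCP port 4420 through iptables, verifies connectivity with single pings in both directions, loads nvme-tcp on the host side, and nvmfappstart launches nvmf_tgt (pid 79087) inside the namespace. Reduced to one initiator/target interface pair, the topology those commands build looks roughly like the condensed sketch below (reconstructed from the trace, not the script verbatim; the full script creates two interfaces per side).

  # Namespace for the SPDK target, one veth pair per side, tied together by a bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                         # NVMF_FIRST_INITIATOR_IP
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the host-side peers so initiator and target interfaces can reach each other.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  # Open the NVMe/TCP listener port and bridge-internal forwarding; the script tags each
  # rule with an SPDK_NVMF comment so nvmftestfini can strip them again later via
  # iptables-save | grep -v SPDK_NVMF | iptables-restore.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
  # Sanity-check both directions, load the host-side transport, start the target in the namespace.
  ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth &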
00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:11.486 06:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=79117 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:11:11.743 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=null 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=34ad2841be67428aeab17430e1f6a9c72af281f41f684d0a 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.2bP 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 34ad2841be67428aeab17430e1f6a9c72af281f41f684d0a 0 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 34ad2841be67428aeab17430e1f6a9c72af281f41f684d0a 0 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=34ad2841be67428aeab17430e1f6a9c72af281f41f684d0a 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=0 00:11:11.744 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.001 06:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.2bP 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.2bP 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.2bP 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=3c337d7ca3549f122af291fa24364c59ae2920772aca9b0dad99ab876d5a4033 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.dzY 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 3c337d7ca3549f122af291fa24364c59ae2920772aca9b0dad99ab876d5a4033 3 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 3c337d7ca3549f122af291fa24364c59ae2920772aca9b0dad99ab876d5a4033 3 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=3c337d7ca3549f122af291fa24364c59ae2920772aca9b0dad99ab876d5a4033 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.dzY 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.dzY 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.dzY 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:12.001 06:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:12.001 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=0415f46216ffacb8871e81f4425d125c 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.kdz 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 0415f46216ffacb8871e81f4425d125c 1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 0415f46216ffacb8871e81f4425d125c 1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=0415f46216ffacb8871e81f4425d125c 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.kdz 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.kdz 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kdz 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=92b384cabeba1d46cc346d2a2dfddae03246faaeda051f1a 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.04y 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 92b384cabeba1d46cc346d2a2dfddae03246faaeda051f1a 2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 92b384cabeba1d46cc346d2a2dfddae03246faaeda051f1a 2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=92b384cabeba1d46cc346d2a2dfddae03246faaeda051f1a 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.04y 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.04y 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.04y 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha384 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=48 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=83c4ae72f079cebd3a4da552e5277a6cb65c867c843e1b80 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.AO0 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 83c4ae72f079cebd3a4da552e5277a6cb65c867c843e1b80 2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 83c4ae72f079cebd3a4da552e5277a6cb65c867c843e1b80 2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=83c4ae72f079cebd3a4da552e5277a6cb65c867c843e1b80 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=2 00:11:12.002 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.AO0 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.AO0 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.AO0 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.260 06:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha256 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=32 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=00b7ab119dd815bf8ad37481d00c02b2 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.EP3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 00b7ab119dd815bf8ad37481d00c02b2 1 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 00b7ab119dd815bf8ad37481d00c02b2 1 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=00b7ab119dd815bf8ad37481d00c02b2 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=1 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.EP3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.EP3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.EP3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # local digest len file key 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@748 -- # local -A digests 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # digest=sha512 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@750 -- # len=64 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # key=228fdb3d225acdd36bc9e2b444491fe8bbb56a4b21bf66df5d29a07995cb284e 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.sQC 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@753 -- # format_dhchap_key 
228fdb3d225acdd36bc9e2b444491fe8bbb56a4b21bf66df5d29a07995cb284e 3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@743 -- # format_key DHHC-1 228fdb3d225acdd36bc9e2b444491fe8bbb56a4b21bf66df5d29a07995cb284e 3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # local prefix key digest 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # key=228fdb3d225acdd36bc9e2b444491fe8bbb56a4b21bf66df5d29a07995cb284e 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # digest=3 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # python - 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.sQC 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.sQC 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.sQC 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 79087 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79087 ']' 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.260 06:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 79117 /var/tmp/host.sock 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 79117 ']' 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
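For reference, the gen_dhchap_key calls traced above reduce to the following shell sketch. Only the xxd, mktemp and chmod steps are visible in this trace; the body of the inline 'python -' step is not shown, so its role (producing the final DHHC-1 key string) is an assumption and is left as a placeholder rather than reimplemented.

# Minimal sketch of the gen_dhchap_key flow seen above (nvmf/common.sh@747-756).
gen_dhchap_key_sketch() {
    local digest=$1 len=$2                    # e.g. sha256 32, sha384 48, sha512 64
    # Digest name -> id handed to the DHHC-1 formatting step, as in the trace.
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # $len hex chars of randomness
    file=$(mktemp -t "spdk.key-${digest}.XXX")
    # format_dhchap_key "$key" "${digests[$digest]}" runs an inline 'python -'
    # here (its body is not shown in this log) to turn the hex into the final
    # DHHC-1 key string; the raw hex is written below only as a stand-in.
    printf '%s\n' "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"                              # path stored in keys[] / ckeys[]
}

In the run above this is invoked as gen_dhchap_key sha256 32, sha384 48 and sha512 64, yielding /tmp/spdk.key-sha256.kdz, /tmp/spdk.key-sha384.04y, /tmp/spdk.key-sha384.AO0, /tmp/spdk.key-sha256.EP3 and /tmp/spdk.key-sha512.sQC, which target/auth.sh stores as keys[1]..keys[3] and ckeys[1]..ckeys[2].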
00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.518 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2bP 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.775 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.032 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.032 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2bP 00:11:13.032 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2bP 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.dzY ]] 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dzY 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dzY 00:11:13.289 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dzY 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kdz 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kdz 00:11:13.547 06:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kdz 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.04y ]] 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.04y 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.04y 00:11:13.804 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.04y 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AO0 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.AO0 00:11:14.061 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.AO0 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.EP3 ]] 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EP3 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EP3 00:11:14.318 06:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EP3 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.sQC 00:11:14.576 06:03:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.sQC 00:11:14.576 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.sQC 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:14.833 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.090 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:15.348 00:11:15.348 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:15.348 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:15.348 06:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:15.605 { 00:11:15.605 "cntlid": 1, 00:11:15.605 "qid": 0, 00:11:15.605 "state": "enabled", 00:11:15.605 "thread": "nvmf_tgt_poll_group_000", 00:11:15.605 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:15.605 "listen_address": { 00:11:15.605 "trtype": "TCP", 00:11:15.605 "adrfam": "IPv4", 00:11:15.605 "traddr": "10.0.0.3", 00:11:15.605 "trsvcid": "4420" 00:11:15.605 }, 00:11:15.605 "peer_address": { 00:11:15.605 "trtype": "TCP", 00:11:15.605 "adrfam": "IPv4", 00:11:15.605 "traddr": "10.0.0.1", 00:11:15.605 "trsvcid": "45808" 00:11:15.605 }, 00:11:15.605 "auth": { 00:11:15.605 "state": "completed", 00:11:15.605 "digest": "sha256", 00:11:15.605 "dhgroup": "null" 00:11:15.605 } 00:11:15.605 } 00:11:15.605 ]' 00:11:15.605 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:15.862 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:16.119 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:16.119 06:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:21.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.374 06:03:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:21.374 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.374 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:21.632 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.632 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:21.632 { 00:11:21.632 "cntlid": 3, 00:11:21.632 "qid": 0, 00:11:21.632 "state": "enabled", 00:11:21.632 "thread": "nvmf_tgt_poll_group_000", 00:11:21.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:21.632 "listen_address": { 00:11:21.632 "trtype": "TCP", 00:11:21.632 "adrfam": "IPv4", 00:11:21.632 "traddr": "10.0.0.3", 00:11:21.632 "trsvcid": "4420" 00:11:21.632 }, 00:11:21.632 "peer_address": { 00:11:21.632 "trtype": "TCP", 00:11:21.632 "adrfam": "IPv4", 00:11:21.632 "traddr": "10.0.0.1", 00:11:21.632 "trsvcid": "45818" 00:11:21.632 }, 00:11:21.632 "auth": { 00:11:21.632 "state": "completed", 00:11:21.632 "digest": "sha256", 00:11:21.632 "dhgroup": "null" 00:11:21.632 } 00:11:21.632 } 00:11:21.632 ]' 00:11:21.632 06:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:21.632 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:21.633 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:21.890 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret 
DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:21.890 06:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:22.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:22.508 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:22.779 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:23.359 00:11:23.359 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:23.359 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:23.360 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:23.618 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:23.618 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:23.618 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:23.618 06:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:23.618 { 00:11:23.618 "cntlid": 5, 00:11:23.618 "qid": 0, 00:11:23.618 "state": "enabled", 00:11:23.618 "thread": "nvmf_tgt_poll_group_000", 00:11:23.618 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:23.618 "listen_address": { 00:11:23.618 "trtype": "TCP", 00:11:23.618 "adrfam": "IPv4", 00:11:23.618 "traddr": "10.0.0.3", 00:11:23.618 "trsvcid": "4420" 00:11:23.618 }, 00:11:23.618 "peer_address": { 00:11:23.618 "trtype": "TCP", 00:11:23.618 "adrfam": "IPv4", 00:11:23.618 "traddr": "10.0.0.1", 00:11:23.618 "trsvcid": "46670" 00:11:23.618 }, 00:11:23.618 "auth": { 00:11:23.618 "state": "completed", 00:11:23.618 "digest": "sha256", 00:11:23.618 "dhgroup": "null" 00:11:23.618 } 00:11:23.618 } 00:11:23.618 ]' 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:23.618 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:23.876 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:23.876 06:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:24.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:24.813 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.073 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:25.332 00:11:25.332 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:25.332 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:25.332 06:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.591 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:25.591 { 00:11:25.591 "cntlid": 7, 00:11:25.591 "qid": 0, 00:11:25.591 "state": "enabled", 00:11:25.592 "thread": "nvmf_tgt_poll_group_000", 00:11:25.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:25.592 "listen_address": { 00:11:25.592 "trtype": "TCP", 00:11:25.592 "adrfam": "IPv4", 00:11:25.592 "traddr": "10.0.0.3", 00:11:25.592 "trsvcid": "4420" 00:11:25.592 }, 00:11:25.592 "peer_address": { 00:11:25.592 "trtype": "TCP", 00:11:25.592 "adrfam": "IPv4", 00:11:25.592 "traddr": "10.0.0.1", 00:11:25.592 "trsvcid": "46682" 00:11:25.592 }, 00:11:25.592 "auth": { 00:11:25.592 "state": "completed", 00:11:25.592 "digest": "sha256", 00:11:25.592 "dhgroup": "null" 00:11:25.592 } 00:11:25.592 } 00:11:25.592 ]' 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:25.592 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:26.160 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:26.160 06:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:26.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.728 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:26.729 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:26.729 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:26.729 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:26.988 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:27.247 00:11:27.247 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:27.247 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:27.247 06:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:27.506 { 00:11:27.506 "cntlid": 9, 00:11:27.506 "qid": 0, 00:11:27.506 "state": "enabled", 00:11:27.506 "thread": "nvmf_tgt_poll_group_000", 00:11:27.506 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:27.506 "listen_address": { 00:11:27.506 "trtype": "TCP", 00:11:27.506 "adrfam": "IPv4", 00:11:27.506 "traddr": "10.0.0.3", 00:11:27.506 "trsvcid": "4420" 00:11:27.506 }, 00:11:27.506 "peer_address": { 00:11:27.506 "trtype": "TCP", 00:11:27.506 "adrfam": "IPv4", 00:11:27.506 "traddr": "10.0.0.1", 00:11:27.506 "trsvcid": "46710" 00:11:27.506 }, 00:11:27.506 "auth": { 00:11:27.506 "state": "completed", 00:11:27.506 "digest": "sha256", 00:11:27.506 "dhgroup": "ffdhe2048" 00:11:27.506 } 00:11:27.506 } 00:11:27.506 ]' 00:11:27.506 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:27.766 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:28.025 
06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:28.025 06:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:28.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:28.594 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:28.854 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:29.114 00:11:29.114 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:29.114 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:29.114 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:29.374 { 00:11:29.374 "cntlid": 11, 00:11:29.374 "qid": 0, 00:11:29.374 "state": "enabled", 00:11:29.374 "thread": "nvmf_tgt_poll_group_000", 00:11:29.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:29.374 "listen_address": { 00:11:29.374 "trtype": "TCP", 00:11:29.374 "adrfam": "IPv4", 00:11:29.374 "traddr": "10.0.0.3", 00:11:29.374 "trsvcid": "4420" 00:11:29.374 }, 00:11:29.374 "peer_address": { 00:11:29.374 "trtype": "TCP", 00:11:29.374 "adrfam": "IPv4", 00:11:29.374 "traddr": "10.0.0.1", 00:11:29.374 "trsvcid": "46742" 00:11:29.374 }, 00:11:29.374 "auth": { 00:11:29.374 "state": "completed", 00:11:29.374 "digest": "sha256", 00:11:29.374 "dhgroup": "ffdhe2048" 00:11:29.374 } 00:11:29.374 } 00:11:29.374 ]' 00:11:29.374 06:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:29.633 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:29.633 
06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:29.892 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:29.892 06:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:30.829 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:30.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:30.830 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:31.397 00:11:31.397 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:31.397 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:31.397 06:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:31.655 { 00:11:31.655 "cntlid": 13, 00:11:31.655 "qid": 0, 00:11:31.655 "state": "enabled", 00:11:31.655 "thread": "nvmf_tgt_poll_group_000", 00:11:31.655 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:31.655 "listen_address": { 00:11:31.655 "trtype": "TCP", 00:11:31.655 "adrfam": "IPv4", 00:11:31.655 "traddr": "10.0.0.3", 00:11:31.655 "trsvcid": "4420" 00:11:31.655 }, 00:11:31.655 "peer_address": { 00:11:31.655 "trtype": "TCP", 00:11:31.655 "adrfam": "IPv4", 00:11:31.655 "traddr": "10.0.0.1", 00:11:31.655 "trsvcid": "46764" 00:11:31.655 }, 00:11:31.655 "auth": { 00:11:31.655 "state": "completed", 00:11:31.655 "digest": "sha256", 00:11:31.655 "dhgroup": "ffdhe2048" 00:11:31.655 } 00:11:31.655 } 00:11:31.655 ]' 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:31.655 06:03:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:31.655 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:31.913 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:31.913 06:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:32.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:32.480 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:32.738 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:32.739 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:33.305 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:33.305 { 00:11:33.305 "cntlid": 15, 00:11:33.305 "qid": 0, 00:11:33.305 "state": "enabled", 00:11:33.305 "thread": "nvmf_tgt_poll_group_000", 00:11:33.305 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:33.305 "listen_address": { 00:11:33.305 "trtype": "TCP", 00:11:33.305 "adrfam": "IPv4", 00:11:33.305 "traddr": "10.0.0.3", 00:11:33.305 "trsvcid": "4420" 00:11:33.305 }, 00:11:33.305 "peer_address": { 00:11:33.305 "trtype": "TCP", 00:11:33.305 "adrfam": "IPv4", 00:11:33.305 "traddr": "10.0.0.1", 00:11:33.305 "trsvcid": "51780" 00:11:33.305 }, 00:11:33.305 "auth": { 00:11:33.305 "state": "completed", 00:11:33.305 "digest": "sha256", 00:11:33.305 "dhgroup": "ffdhe2048" 00:11:33.305 } 00:11:33.305 } 00:11:33.305 ]' 00:11:33.305 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:33.562 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:33.562 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:33.562 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:11:33.562 06:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:33.562 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:33.562 
06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:33.562 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:33.820 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:33.820 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:34.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:34.388 06:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:34.648 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:35.217 00:11:35.217 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:35.217 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:35.217 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:35.476 { 00:11:35.476 "cntlid": 17, 00:11:35.476 "qid": 0, 00:11:35.476 "state": "enabled", 00:11:35.476 "thread": "nvmf_tgt_poll_group_000", 00:11:35.476 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:35.476 "listen_address": { 00:11:35.476 "trtype": "TCP", 00:11:35.476 "adrfam": "IPv4", 00:11:35.476 "traddr": "10.0.0.3", 00:11:35.476 "trsvcid": "4420" 00:11:35.476 }, 00:11:35.476 "peer_address": { 00:11:35.476 "trtype": "TCP", 00:11:35.476 "adrfam": "IPv4", 00:11:35.476 "traddr": "10.0.0.1", 00:11:35.476 "trsvcid": "51790" 00:11:35.476 }, 00:11:35.476 "auth": { 00:11:35.476 "state": "completed", 00:11:35.476 "digest": "sha256", 00:11:35.476 "dhgroup": "ffdhe3072" 00:11:35.476 } 00:11:35.476 } 00:11:35.476 ]' 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:35.476 06:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:35.476 06:04:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:35.476 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:35.476 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:35.736 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:35.736 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:36.683 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:36.683 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:36.683 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:36.683 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.683 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.683 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.684 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:36.684 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:36.684 06:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:36.684 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:11:36.684 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:36.684 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:36.684 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:36.955 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:37.213 00:11:37.213 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:37.213 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:37.213 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:37.472 { 00:11:37.472 "cntlid": 19, 00:11:37.472 "qid": 0, 00:11:37.472 "state": "enabled", 00:11:37.472 "thread": "nvmf_tgt_poll_group_000", 00:11:37.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:37.472 "listen_address": { 00:11:37.472 "trtype": "TCP", 00:11:37.472 "adrfam": "IPv4", 00:11:37.472 "traddr": "10.0.0.3", 00:11:37.472 "trsvcid": "4420" 00:11:37.472 }, 00:11:37.472 "peer_address": { 00:11:37.472 "trtype": "TCP", 00:11:37.472 "adrfam": "IPv4", 00:11:37.472 "traddr": "10.0.0.1", 00:11:37.472 "trsvcid": "51822" 00:11:37.472 }, 00:11:37.472 "auth": { 00:11:37.472 "state": "completed", 00:11:37.472 "digest": "sha256", 00:11:37.472 "dhgroup": "ffdhe3072" 00:11:37.472 } 00:11:37.472 } 00:11:37.472 ]' 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:37.472 06:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:37.472 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:37.472 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:37.472 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:37.472 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:37.472 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:37.732 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:37.732 06:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:38.670 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:38.670 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:38.929 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:39.189 00:11:39.189 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:39.189 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:39.189 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:39.448 { 00:11:39.448 "cntlid": 21, 00:11:39.448 "qid": 0, 00:11:39.448 "state": "enabled", 00:11:39.448 "thread": "nvmf_tgt_poll_group_000", 00:11:39.448 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:39.448 "listen_address": { 00:11:39.448 "trtype": "TCP", 00:11:39.448 "adrfam": "IPv4", 00:11:39.448 "traddr": "10.0.0.3", 00:11:39.448 "trsvcid": "4420" 00:11:39.448 }, 00:11:39.448 "peer_address": { 00:11:39.448 "trtype": "TCP", 00:11:39.448 "adrfam": "IPv4", 00:11:39.448 "traddr": "10.0.0.1", 00:11:39.448 "trsvcid": "51856" 00:11:39.448 }, 00:11:39.448 "auth": { 00:11:39.448 "state": "completed", 00:11:39.448 "digest": "sha256", 00:11:39.448 "dhgroup": "ffdhe3072" 00:11:39.448 } 00:11:39.448 } 00:11:39.448 ]' 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:39.448 06:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:39.448 06:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:39.448 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:39.448 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:39.448 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:40.017 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:40.017 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:40.586 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:40.586 06:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:40.845 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:41.104 00:11:41.104 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:41.104 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:41.104 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:41.363 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:41.363 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:41.363 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.363 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:41.363 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.622 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:41.622 { 00:11:41.622 "cntlid": 23, 00:11:41.622 "qid": 0, 00:11:41.622 "state": "enabled", 00:11:41.622 "thread": "nvmf_tgt_poll_group_000", 00:11:41.622 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:41.622 "listen_address": { 00:11:41.622 "trtype": "TCP", 00:11:41.622 "adrfam": "IPv4", 00:11:41.622 "traddr": "10.0.0.3", 00:11:41.622 "trsvcid": "4420" 00:11:41.622 }, 00:11:41.622 "peer_address": { 00:11:41.622 "trtype": "TCP", 00:11:41.622 "adrfam": "IPv4", 00:11:41.622 "traddr": "10.0.0.1", 00:11:41.622 "trsvcid": "51894" 00:11:41.622 }, 00:11:41.622 "auth": { 00:11:41.622 "state": "completed", 00:11:41.622 "digest": "sha256", 00:11:41.622 "dhgroup": "ffdhe3072" 00:11:41.622 } 00:11:41.622 } 00:11:41.622 ]' 00:11:41.622 06:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:41.622 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:41.882 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:41.882 06:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:42.450 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:42.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:42.709 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:42.968 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:43.227 00:11:43.227 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:43.227 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:43.227 06:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:43.486 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:43.486 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:43.486 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.486 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:43.486 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:43.745 { 00:11:43.745 "cntlid": 25, 00:11:43.745 "qid": 0, 00:11:43.745 "state": "enabled", 00:11:43.745 "thread": "nvmf_tgt_poll_group_000", 00:11:43.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:43.745 "listen_address": { 00:11:43.745 "trtype": "TCP", 00:11:43.745 "adrfam": "IPv4", 00:11:43.745 "traddr": "10.0.0.3", 00:11:43.745 "trsvcid": "4420" 00:11:43.745 }, 00:11:43.745 "peer_address": { 00:11:43.745 "trtype": "TCP", 00:11:43.745 "adrfam": "IPv4", 00:11:43.745 "traddr": "10.0.0.1", 00:11:43.745 "trsvcid": "33586" 00:11:43.745 }, 00:11:43.745 "auth": { 00:11:43.745 "state": "completed", 00:11:43.745 "digest": "sha256", 00:11:43.745 "dhgroup": "ffdhe4096" 00:11:43.745 } 00:11:43.745 } 00:11:43.745 ]' 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:43.745 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:44.003 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:44.003 06:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:44.570 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:44.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:44.571 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:44.830 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:45.398 00:11:45.398 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:45.398 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:45.398 06:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:45.656 { 00:11:45.656 "cntlid": 27, 00:11:45.656 "qid": 0, 00:11:45.656 "state": "enabled", 00:11:45.656 "thread": "nvmf_tgt_poll_group_000", 00:11:45.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:45.656 "listen_address": { 00:11:45.656 "trtype": "TCP", 00:11:45.656 "adrfam": "IPv4", 00:11:45.656 "traddr": "10.0.0.3", 00:11:45.656 "trsvcid": "4420" 00:11:45.656 }, 00:11:45.656 "peer_address": { 00:11:45.656 "trtype": "TCP", 00:11:45.656 "adrfam": "IPv4", 00:11:45.656 "traddr": "10.0.0.1", 00:11:45.656 "trsvcid": "33628" 00:11:45.656 }, 00:11:45.656 "auth": { 00:11:45.656 "state": "completed", 
00:11:45.656 "digest": "sha256", 00:11:45.656 "dhgroup": "ffdhe4096" 00:11:45.656 } 00:11:45.656 } 00:11:45.656 ]' 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:45.656 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:45.916 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:45.916 06:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:46.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:46.849 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:47.108 06:04:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.108 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:47.367 00:11:47.367 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:47.367 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:47.367 06:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:47.625 { 00:11:47.625 "cntlid": 29, 00:11:47.625 "qid": 0, 00:11:47.625 "state": "enabled", 00:11:47.625 "thread": "nvmf_tgt_poll_group_000", 00:11:47.625 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:47.625 "listen_address": { 00:11:47.625 "trtype": "TCP", 00:11:47.625 "adrfam": "IPv4", 00:11:47.625 "traddr": "10.0.0.3", 00:11:47.625 "trsvcid": "4420" 00:11:47.625 }, 00:11:47.625 "peer_address": { 00:11:47.625 "trtype": "TCP", 00:11:47.625 "adrfam": 
"IPv4", 00:11:47.625 "traddr": "10.0.0.1", 00:11:47.625 "trsvcid": "33648" 00:11:47.625 }, 00:11:47.625 "auth": { 00:11:47.625 "state": "completed", 00:11:47.625 "digest": "sha256", 00:11:47.625 "dhgroup": "ffdhe4096" 00:11:47.625 } 00:11:47.625 } 00:11:47.625 ]' 00:11:47.625 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:47.884 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:48.143 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:48.143 06:04:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:48.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:48.783 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:11:49.043 06:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.043 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:49.302 00:11:49.302 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:49.302 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:49.302 06:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:49.869 { 00:11:49.869 "cntlid": 31, 00:11:49.869 "qid": 0, 00:11:49.869 "state": "enabled", 00:11:49.869 "thread": "nvmf_tgt_poll_group_000", 00:11:49.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:49.869 "listen_address": { 00:11:49.869 "trtype": "TCP", 00:11:49.869 "adrfam": "IPv4", 00:11:49.869 "traddr": "10.0.0.3", 00:11:49.869 "trsvcid": "4420" 00:11:49.869 }, 00:11:49.869 "peer_address": { 00:11:49.869 "trtype": "TCP", 
00:11:49.869 "adrfam": "IPv4", 00:11:49.869 "traddr": "10.0.0.1", 00:11:49.869 "trsvcid": "33676" 00:11:49.869 }, 00:11:49.869 "auth": { 00:11:49.869 "state": "completed", 00:11:49.869 "digest": "sha256", 00:11:49.869 "dhgroup": "ffdhe4096" 00:11:49.869 } 00:11:49.869 } 00:11:49.869 ]' 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:49.869 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:50.129 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:50.129 06:04:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:51.065 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:11:51.065 
06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:51.065 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.066 06:04:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:51.633 00:11:51.633 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:51.633 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:51.633 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:51.891 { 00:11:51.891 "cntlid": 33, 00:11:51.891 "qid": 0, 00:11:51.891 "state": "enabled", 00:11:51.891 "thread": "nvmf_tgt_poll_group_000", 00:11:51.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:51.891 "listen_address": { 00:11:51.891 "trtype": "TCP", 00:11:51.891 "adrfam": "IPv4", 00:11:51.891 "traddr": 
"10.0.0.3", 00:11:51.891 "trsvcid": "4420" 00:11:51.891 }, 00:11:51.891 "peer_address": { 00:11:51.891 "trtype": "TCP", 00:11:51.891 "adrfam": "IPv4", 00:11:51.891 "traddr": "10.0.0.1", 00:11:51.891 "trsvcid": "33690" 00:11:51.891 }, 00:11:51.891 "auth": { 00:11:51.891 "state": "completed", 00:11:51.891 "digest": "sha256", 00:11:51.891 "dhgroup": "ffdhe6144" 00:11:51.891 } 00:11:51.891 } 00:11:51.891 ]' 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:51.891 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:52.150 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:52.150 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:52.150 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:52.408 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:52.409 06:04:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:52.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:52.976 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.235 06:04:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:11:53.501 00:11:53.501 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:53.501 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:53.501 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:53.760 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:53.760 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:53.760 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:53.760 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:54.019 { 00:11:54.019 "cntlid": 35, 00:11:54.019 "qid": 0, 00:11:54.019 "state": "enabled", 00:11:54.019 "thread": "nvmf_tgt_poll_group_000", 
00:11:54.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:54.019 "listen_address": { 00:11:54.019 "trtype": "TCP", 00:11:54.019 "adrfam": "IPv4", 00:11:54.019 "traddr": "10.0.0.3", 00:11:54.019 "trsvcid": "4420" 00:11:54.019 }, 00:11:54.019 "peer_address": { 00:11:54.019 "trtype": "TCP", 00:11:54.019 "adrfam": "IPv4", 00:11:54.019 "traddr": "10.0.0.1", 00:11:54.019 "trsvcid": "55992" 00:11:54.019 }, 00:11:54.019 "auth": { 00:11:54.019 "state": "completed", 00:11:54.019 "digest": "sha256", 00:11:54.019 "dhgroup": "ffdhe6144" 00:11:54.019 } 00:11:54.019 } 00:11:54.019 ]' 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:54.019 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:54.279 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:54.279 06:04:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:55.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:55.215 06:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.215 06:04:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:11:55.781 00:11:55.781 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:55.781 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:55.781 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.040 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:11:56.040 { 
00:11:56.040 "cntlid": 37, 00:11:56.040 "qid": 0, 00:11:56.040 "state": "enabled", 00:11:56.040 "thread": "nvmf_tgt_poll_group_000", 00:11:56.040 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:56.040 "listen_address": { 00:11:56.040 "trtype": "TCP", 00:11:56.041 "adrfam": "IPv4", 00:11:56.041 "traddr": "10.0.0.3", 00:11:56.041 "trsvcid": "4420" 00:11:56.041 }, 00:11:56.041 "peer_address": { 00:11:56.041 "trtype": "TCP", 00:11:56.041 "adrfam": "IPv4", 00:11:56.041 "traddr": "10.0.0.1", 00:11:56.041 "trsvcid": "56024" 00:11:56.041 }, 00:11:56.041 "auth": { 00:11:56.041 "state": "completed", 00:11:56.041 "digest": "sha256", 00:11:56.041 "dhgroup": "ffdhe6144" 00:11:56.041 } 00:11:56.041 } 00:11:56.041 ]' 00:11:56.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:56.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:56.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:56.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:56.041 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:56.300 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:56.300 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:56.300 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:56.559 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:56.559 06:04:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:57.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:57.127 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.386 06:04:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:11:57.672 00:11:57.672 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:57.672 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:57.672 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:11:57.931 { 00:11:57.931 "cntlid": 39, 00:11:57.931 "qid": 0, 00:11:57.931 "state": "enabled", 00:11:57.931 "thread": "nvmf_tgt_poll_group_000", 00:11:57.931 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:11:57.931 "listen_address": { 00:11:57.931 "trtype": "TCP", 00:11:57.931 "adrfam": "IPv4", 00:11:57.931 "traddr": "10.0.0.3", 00:11:57.931 "trsvcid": "4420" 00:11:57.931 }, 00:11:57.931 "peer_address": { 00:11:57.931 "trtype": "TCP", 00:11:57.931 "adrfam": "IPv4", 00:11:57.931 "traddr": "10.0.0.1", 00:11:57.931 "trsvcid": "56032" 00:11:57.931 }, 00:11:57.931 "auth": { 00:11:57.931 "state": "completed", 00:11:57.931 "digest": "sha256", 00:11:57.931 "dhgroup": "ffdhe6144" 00:11:57.931 } 00:11:57.931 } 00:11:57.931 ]' 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:11:57.931 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:11:58.190 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:11:58.190 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:11:58.190 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:11:58.190 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:11:58.190 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:11:58.449 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:58.449 06:04:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:11:59.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:59.018 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.278 06:04:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:11:59.845 00:11:59.845 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:11:59.845 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:11:59.845 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:00.471 { 00:12:00.471 "cntlid": 41, 00:12:00.471 "qid": 0, 00:12:00.471 "state": "enabled", 00:12:00.471 "thread": "nvmf_tgt_poll_group_000", 00:12:00.471 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:00.471 "listen_address": { 00:12:00.471 "trtype": "TCP", 00:12:00.471 "adrfam": "IPv4", 00:12:00.471 "traddr": "10.0.0.3", 00:12:00.471 "trsvcid": "4420" 00:12:00.471 }, 00:12:00.471 "peer_address": { 00:12:00.471 "trtype": "TCP", 00:12:00.471 "adrfam": "IPv4", 00:12:00.471 "traddr": "10.0.0.1", 00:12:00.471 "trsvcid": "56064" 00:12:00.471 }, 00:12:00.471 "auth": { 00:12:00.471 "state": "completed", 00:12:00.471 "digest": "sha256", 00:12:00.471 "dhgroup": "ffdhe8192" 00:12:00.471 } 00:12:00.471 } 00:12:00.471 ]' 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:00.471 06:04:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:00.730 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:00.730 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:01.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.298 06:04:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:01.558 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:02.126 00:12:02.126 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:02.126 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:02.126 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:02.385 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:02.385 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:02.386 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.386 06:04:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:02.386 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.386 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:02.386 { 00:12:02.386 "cntlid": 43, 00:12:02.386 "qid": 0, 00:12:02.386 "state": "enabled", 00:12:02.386 "thread": "nvmf_tgt_poll_group_000", 00:12:02.386 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:02.386 "listen_address": { 00:12:02.386 "trtype": "TCP", 00:12:02.386 "adrfam": "IPv4", 00:12:02.386 "traddr": "10.0.0.3", 00:12:02.386 "trsvcid": "4420" 00:12:02.386 }, 00:12:02.386 "peer_address": { 00:12:02.386 "trtype": "TCP", 00:12:02.386 "adrfam": "IPv4", 00:12:02.386 "traddr": "10.0.0.1", 00:12:02.386 "trsvcid": "56104" 00:12:02.386 }, 00:12:02.386 "auth": { 00:12:02.386 "state": "completed", 00:12:02.386 "digest": "sha256", 00:12:02.386 "dhgroup": "ffdhe8192" 00:12:02.386 } 00:12:02.386 } 00:12:02.386 ]' 00:12:02.386 06:04:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:02.646 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:02.905 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:02.905 06:04:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:03.842 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:03.842 06:04:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:04.779 00:12:04.779 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:04.779 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:04.779 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:05.038 06:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:05.038 { 00:12:05.038 "cntlid": 45, 00:12:05.038 "qid": 0, 00:12:05.038 "state": "enabled", 00:12:05.038 "thread": "nvmf_tgt_poll_group_000", 00:12:05.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:05.038 "listen_address": { 00:12:05.038 "trtype": "TCP", 00:12:05.038 "adrfam": "IPv4", 00:12:05.038 "traddr": "10.0.0.3", 00:12:05.038 "trsvcid": "4420" 00:12:05.038 }, 00:12:05.038 "peer_address": { 00:12:05.038 "trtype": "TCP", 00:12:05.038 "adrfam": "IPv4", 00:12:05.038 "traddr": "10.0.0.1", 00:12:05.038 "trsvcid": "49196" 00:12:05.038 }, 00:12:05.038 "auth": { 00:12:05.038 "state": "completed", 00:12:05.038 "digest": "sha256", 00:12:05.038 "dhgroup": "ffdhe8192" 00:12:05.038 } 00:12:05.038 } 00:12:05.038 ]' 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:05.038 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:05.297 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:05.297 06:04:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:06.234 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:06.234 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:12:06.493 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:12:06.493 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:06.494 06:04:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:07.061 00:12:07.061 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:07.061 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:07.061 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:07.320 
06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:07.320 { 00:12:07.320 "cntlid": 47, 00:12:07.320 "qid": 0, 00:12:07.320 "state": "enabled", 00:12:07.320 "thread": "nvmf_tgt_poll_group_000", 00:12:07.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:07.320 "listen_address": { 00:12:07.320 "trtype": "TCP", 00:12:07.320 "adrfam": "IPv4", 00:12:07.320 "traddr": "10.0.0.3", 00:12:07.320 "trsvcid": "4420" 00:12:07.320 }, 00:12:07.320 "peer_address": { 00:12:07.320 "trtype": "TCP", 00:12:07.320 "adrfam": "IPv4", 00:12:07.320 "traddr": "10.0.0.1", 00:12:07.320 "trsvcid": "49236" 00:12:07.320 }, 00:12:07.320 "auth": { 00:12:07.320 "state": "completed", 00:12:07.320 "digest": "sha256", 00:12:07.320 "dhgroup": "ffdhe8192" 00:12:07.320 } 00:12:07.320 } 00:12:07.320 ]' 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:07.320 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:07.578 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:07.578 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:07.578 06:04:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:07.836 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:07.836 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:08.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
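The loop now switches to sha384 with the null DH group and repeats the same three host/target RPCs for each key index. A condensed sketch of one iteration, using the socket path, NQNs, and flags that appear verbatim in the trace; key0 and ckey0 are DH-HMAC-CHAP key names registered earlier in the test, outside this excerpt:

  HOSTSOCK=/var/tmp/host.sock
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # Host side: restrict the initiator to a single digest/dhgroup combination
  scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
  # Target side: admit the host on the subsystem with the matching key pair
  scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
  # Host side: attach a controller, authenticating in-band with the same keys
  scripts/rpc.py -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0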
00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:08.402 06:04:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:08.661 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.662 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.662 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.662 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:08.920 00:12:08.920 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:08.920 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:08.920 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:09.178 { 00:12:09.178 "cntlid": 49, 00:12:09.178 "qid": 0, 00:12:09.178 "state": "enabled", 00:12:09.178 "thread": "nvmf_tgt_poll_group_000", 00:12:09.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:09.178 "listen_address": { 00:12:09.178 "trtype": "TCP", 00:12:09.178 "adrfam": "IPv4", 00:12:09.178 "traddr": "10.0.0.3", 00:12:09.178 "trsvcid": "4420" 00:12:09.178 }, 00:12:09.178 "peer_address": { 00:12:09.178 "trtype": "TCP", 00:12:09.178 "adrfam": "IPv4", 00:12:09.178 "traddr": "10.0.0.1", 00:12:09.178 "trsvcid": "49270" 00:12:09.178 }, 00:12:09.178 "auth": { 00:12:09.178 "state": "completed", 00:12:09.178 "digest": "sha384", 00:12:09.178 "dhgroup": "null" 00:12:09.178 } 00:12:09.178 } 00:12:09.178 ]' 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:09.178 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:09.437 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:09.437 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:09.437 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:09.437 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:09.437 06:04:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:09.696 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:09.696 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:10.262 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:10.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:10.520 06:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:10.521 06:04:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:10.779 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:11.038 00:12:11.038 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:11.038 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:11.038 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:11.297 { 00:12:11.297 "cntlid": 51, 00:12:11.297 "qid": 0, 00:12:11.297 "state": "enabled", 00:12:11.297 "thread": "nvmf_tgt_poll_group_000", 00:12:11.297 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:11.297 "listen_address": { 00:12:11.297 "trtype": "TCP", 00:12:11.297 "adrfam": "IPv4", 00:12:11.297 "traddr": "10.0.0.3", 00:12:11.297 "trsvcid": "4420" 00:12:11.297 }, 00:12:11.297 "peer_address": { 00:12:11.297 "trtype": "TCP", 00:12:11.297 "adrfam": "IPv4", 00:12:11.297 "traddr": "10.0.0.1", 00:12:11.297 "trsvcid": "49300" 00:12:11.297 }, 00:12:11.297 "auth": { 00:12:11.297 "state": "completed", 00:12:11.297 "digest": "sha384", 00:12:11.297 "dhgroup": "null" 00:12:11.297 } 00:12:11.297 } 00:12:11.297 ]' 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:11.297 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:11.556 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:11.556 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:11.556 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:11.556 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:11.556 06:04:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:11.815 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:11.815 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:12.386 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:12.386 06:04:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:12.645 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:13.212 00:12:13.212 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:13.212 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:13.212 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:13.470 { 00:12:13.470 "cntlid": 53, 00:12:13.470 "qid": 0, 00:12:13.470 "state": "enabled", 00:12:13.470 "thread": "nvmf_tgt_poll_group_000", 00:12:13.470 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:13.470 "listen_address": { 00:12:13.470 "trtype": "TCP", 00:12:13.470 "adrfam": "IPv4", 00:12:13.470 "traddr": "10.0.0.3", 00:12:13.470 "trsvcid": "4420" 00:12:13.470 }, 00:12:13.470 "peer_address": { 00:12:13.470 "trtype": "TCP", 00:12:13.470 "adrfam": "IPv4", 00:12:13.470 "traddr": "10.0.0.1", 00:12:13.470 "trsvcid": "58004" 00:12:13.470 }, 00:12:13.470 "auth": { 00:12:13.470 "state": "completed", 00:12:13.470 "digest": "sha384", 00:12:13.470 "dhgroup": "null" 00:12:13.470 } 00:12:13.470 } 00:12:13.470 ]' 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:13.470 06:04:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:13.470 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:13.470 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:13.470 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:13.729 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:13.729 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:14.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:14.297 06:04:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.866 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:14.866 00:12:15.125 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:15.125 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
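Between the attach and the qpair inspection each iteration also confirms that the host-side bdev controller actually came up, then detaches it once the checks pass. A short sketch of those two steps against the host RPC socket used throughout the trace:

  # The attach above should have produced exactly one controller, named nvme0
  [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  # Tear it down again so the next key/dhgroup combination starts from a clean state
  scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0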
00:12:15.125 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:15.384 { 00:12:15.384 "cntlid": 55, 00:12:15.384 "qid": 0, 00:12:15.384 "state": "enabled", 00:12:15.384 "thread": "nvmf_tgt_poll_group_000", 00:12:15.384 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:15.384 "listen_address": { 00:12:15.384 "trtype": "TCP", 00:12:15.384 "adrfam": "IPv4", 00:12:15.384 "traddr": "10.0.0.3", 00:12:15.384 "trsvcid": "4420" 00:12:15.384 }, 00:12:15.384 "peer_address": { 00:12:15.384 "trtype": "TCP", 00:12:15.384 "adrfam": "IPv4", 00:12:15.384 "traddr": "10.0.0.1", 00:12:15.384 "trsvcid": "58028" 00:12:15.384 }, 00:12:15.384 "auth": { 00:12:15.384 "state": "completed", 00:12:15.384 "digest": "sha384", 00:12:15.384 "dhgroup": "null" 00:12:15.384 } 00:12:15.384 } 00:12:15.384 ]' 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:15.384 06:04:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:15.641 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:15.642 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:16.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
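After the bdev path is verified, the same handshake is exercised from the kernel initiator with nvme-cli, passing the DHHC-1 formatted secrets on the command line, and the host entry is then removed on the target before the next key is tested. A condensed sketch with the long secrets from the trace replaced by placeholder variables (DHCHAP_KEY and DHCHAP_CTRL_KEY stand for the DHHC-1:xx:... strings shown above); iterations whose key has no controller secret omit the --dhchap-ctrl-secret flag entirely, as in the key3 case directly above:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06
  DHCHAP_KEY='DHHC-1:03:...'        # host secret, copied from the trace
  DHCHAP_CTRL_KEY='DHHC-1:02:...'   # controller secret, when the key pair has one
  # Kernel initiator: connect with in-band DH-HMAC-CHAP authentication
  nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$HOSTNQN" --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 \
    --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  # Target side: revoke the host again before the next key is tested
  scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN"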
00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:16.209 06:04:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:16.776 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:12:16.776 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:16.776 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:16.777 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:17.036 00:12:17.036 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:17.036 
06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:17.036 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:17.295 { 00:12:17.295 "cntlid": 57, 00:12:17.295 "qid": 0, 00:12:17.295 "state": "enabled", 00:12:17.295 "thread": "nvmf_tgt_poll_group_000", 00:12:17.295 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:17.295 "listen_address": { 00:12:17.295 "trtype": "TCP", 00:12:17.295 "adrfam": "IPv4", 00:12:17.295 "traddr": "10.0.0.3", 00:12:17.295 "trsvcid": "4420" 00:12:17.295 }, 00:12:17.295 "peer_address": { 00:12:17.295 "trtype": "TCP", 00:12:17.295 "adrfam": "IPv4", 00:12:17.295 "traddr": "10.0.0.1", 00:12:17.295 "trsvcid": "58046" 00:12:17.295 }, 00:12:17.295 "auth": { 00:12:17.295 "state": "completed", 00:12:17.295 "digest": "sha384", 00:12:17.295 "dhgroup": "ffdhe2048" 00:12:17.295 } 00:12:17.295 } 00:12:17.295 ]' 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:17.295 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:17.555 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:17.555 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:17.555 06:04:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:17.811 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:17.811 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: 
--dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:18.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.378 06:04:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.637 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:18.894 00:12:18.894 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:18.894 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:18.894 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:19.461 { 00:12:19.461 "cntlid": 59, 00:12:19.461 "qid": 0, 00:12:19.461 "state": "enabled", 00:12:19.461 "thread": "nvmf_tgt_poll_group_000", 00:12:19.461 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:19.461 "listen_address": { 00:12:19.461 "trtype": "TCP", 00:12:19.461 "adrfam": "IPv4", 00:12:19.461 "traddr": "10.0.0.3", 00:12:19.461 "trsvcid": "4420" 00:12:19.461 }, 00:12:19.461 "peer_address": { 00:12:19.461 "trtype": "TCP", 00:12:19.461 "adrfam": "IPv4", 00:12:19.461 "traddr": "10.0.0.1", 00:12:19.461 "trsvcid": "58080" 00:12:19.461 }, 00:12:19.461 "auth": { 00:12:19.461 "state": "completed", 00:12:19.461 "digest": "sha384", 00:12:19.461 "dhgroup": "ffdhe2048" 00:12:19.461 } 00:12:19.461 } 00:12:19.461 ]' 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:19.461 06:04:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:19.720 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:19.720 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:20.657 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:20.657 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:20.658 06:04:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:20.916 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:21.174 00:12:21.174 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:21.174 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:21.174 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:21.433 { 00:12:21.433 "cntlid": 61, 00:12:21.433 "qid": 0, 00:12:21.433 "state": "enabled", 00:12:21.433 "thread": "nvmf_tgt_poll_group_000", 00:12:21.433 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:21.433 "listen_address": { 00:12:21.433 "trtype": "TCP", 00:12:21.433 "adrfam": "IPv4", 00:12:21.433 "traddr": "10.0.0.3", 00:12:21.433 "trsvcid": "4420" 00:12:21.433 }, 00:12:21.433 "peer_address": { 00:12:21.433 "trtype": "TCP", 00:12:21.433 "adrfam": "IPv4", 00:12:21.433 "traddr": "10.0.0.1", 00:12:21.433 "trsvcid": "58102" 00:12:21.433 }, 00:12:21.433 "auth": { 00:12:21.433 "state": "completed", 00:12:21.433 "digest": "sha384", 00:12:21.433 "dhgroup": "ffdhe2048" 00:12:21.433 } 00:12:21.433 } 00:12:21.433 ]' 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:21.433 06:04:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:21.433 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:21.433 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:21.693 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:21.693 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:21.693 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:21.952 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:21.952 06:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:22.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:22.521 06:04:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:22.780 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:23.040 00:12:23.040 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:23.040 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:23.040 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:23.299 { 00:12:23.299 "cntlid": 63, 00:12:23.299 "qid": 0, 00:12:23.299 "state": "enabled", 00:12:23.299 "thread": "nvmf_tgt_poll_group_000", 00:12:23.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:23.299 "listen_address": { 00:12:23.299 "trtype": "TCP", 00:12:23.299 "adrfam": "IPv4", 00:12:23.299 "traddr": "10.0.0.3", 00:12:23.299 "trsvcid": "4420" 00:12:23.299 }, 00:12:23.299 "peer_address": { 00:12:23.299 "trtype": "TCP", 00:12:23.299 "adrfam": "IPv4", 00:12:23.299 "traddr": "10.0.0.1", 00:12:23.299 "trsvcid": "53402" 00:12:23.299 }, 00:12:23.299 "auth": { 00:12:23.299 "state": "completed", 00:12:23.299 "digest": "sha384", 00:12:23.299 "dhgroup": "ffdhe2048" 00:12:23.299 } 00:12:23.299 } 00:12:23.299 ]' 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:23.299 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:23.300 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:12:23.300 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:23.561 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:23.561 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:23.561 06:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:23.826 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:23.826 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:24.395 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:24.395 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.396 06:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:12:24.655 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:25.224 00:12:25.224 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:25.224 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:25.224 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:25.483 { 00:12:25.483 "cntlid": 65, 00:12:25.483 "qid": 0, 00:12:25.483 "state": "enabled", 00:12:25.483 "thread": "nvmf_tgt_poll_group_000", 00:12:25.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:25.483 "listen_address": { 00:12:25.483 "trtype": "TCP", 00:12:25.483 "adrfam": "IPv4", 00:12:25.483 "traddr": "10.0.0.3", 00:12:25.483 "trsvcid": "4420" 00:12:25.483 }, 00:12:25.483 "peer_address": { 00:12:25.483 "trtype": "TCP", 00:12:25.483 "adrfam": "IPv4", 00:12:25.483 "traddr": "10.0.0.1", 00:12:25.483 "trsvcid": "53434" 00:12:25.483 }, 00:12:25.483 "auth": { 00:12:25.483 "state": "completed", 00:12:25.483 "digest": "sha384", 00:12:25.483 "dhgroup": "ffdhe3072" 00:12:25.483 } 00:12:25.483 } 00:12:25.483 ]' 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:25.483 06:04:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:25.483 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:25.483 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:25.483 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:25.483 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:25.484 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:25.743 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:25.743 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:26.680 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.680 06:04:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:26.680 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:12:26.680 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:26.680 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:26.680 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.681 06:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:26.681 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:27.249 00:12:27.249 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:27.249 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:27.249 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:27.508 { 00:12:27.508 "cntlid": 67, 00:12:27.508 "qid": 0, 00:12:27.508 "state": "enabled", 00:12:27.508 "thread": "nvmf_tgt_poll_group_000", 00:12:27.508 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:27.508 "listen_address": { 00:12:27.508 "trtype": "TCP", 00:12:27.508 "adrfam": "IPv4", 00:12:27.508 "traddr": "10.0.0.3", 00:12:27.508 "trsvcid": "4420" 00:12:27.508 }, 00:12:27.508 "peer_address": { 00:12:27.508 "trtype": "TCP", 00:12:27.508 "adrfam": "IPv4", 00:12:27.508 "traddr": "10.0.0.1", 00:12:27.508 "trsvcid": "53454" 00:12:27.508 }, 00:12:27.508 "auth": { 00:12:27.508 "state": "completed", 00:12:27.508 "digest": "sha384", 00:12:27.508 "dhgroup": "ffdhe3072" 00:12:27.508 } 00:12:27.508 } 00:12:27.508 ]' 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:27.508 06:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:27.508 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:27.767 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:27.767 06:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:28.705 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.705 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:28.965 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:29.226 00:12:29.226 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:29.226 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:29.226 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:29.485 { 00:12:29.485 "cntlid": 69, 00:12:29.485 "qid": 0, 00:12:29.485 "state": "enabled", 00:12:29.485 "thread": "nvmf_tgt_poll_group_000", 00:12:29.485 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:29.485 "listen_address": { 00:12:29.485 "trtype": "TCP", 00:12:29.485 "adrfam": "IPv4", 00:12:29.485 "traddr": "10.0.0.3", 00:12:29.485 "trsvcid": "4420" 00:12:29.485 }, 00:12:29.485 "peer_address": { 00:12:29.485 "trtype": "TCP", 00:12:29.485 "adrfam": "IPv4", 00:12:29.485 "traddr": "10.0.0.1", 00:12:29.485 "trsvcid": "53486" 00:12:29.485 }, 00:12:29.485 "auth": { 00:12:29.485 "state": "completed", 00:12:29.485 "digest": "sha384", 00:12:29.485 "dhgroup": "ffdhe3072" 00:12:29.485 } 00:12:29.485 } 00:12:29.485 ]' 00:12:29.485 06:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:29.485 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:29.485 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:29.485 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:29.485 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:29.745 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:29.745 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:12:29.745 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:29.746 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:29.746 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:30.313 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:30.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:30.573 06:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:30.833 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:31.092 00:12:31.092 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:31.092 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:31.092 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:31.352 { 00:12:31.352 "cntlid": 71, 00:12:31.352 "qid": 0, 00:12:31.352 "state": "enabled", 00:12:31.352 "thread": "nvmf_tgt_poll_group_000", 00:12:31.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:31.352 "listen_address": { 00:12:31.352 "trtype": "TCP", 00:12:31.352 "adrfam": "IPv4", 00:12:31.352 "traddr": "10.0.0.3", 00:12:31.352 "trsvcid": "4420" 00:12:31.352 }, 00:12:31.352 "peer_address": { 00:12:31.352 "trtype": "TCP", 00:12:31.352 "adrfam": "IPv4", 00:12:31.352 "traddr": "10.0.0.1", 00:12:31.352 "trsvcid": "53500" 00:12:31.352 }, 00:12:31.352 "auth": { 00:12:31.352 "state": "completed", 00:12:31.352 "digest": "sha384", 00:12:31.352 "dhgroup": "ffdhe3072" 00:12:31.352 } 00:12:31.352 } 00:12:31.352 ]' 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:12:31.352 06:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:31.611 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:31.611 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:31.611 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:31.870 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:31.870 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:32.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.440 06:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:32.700 06:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.700 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.701 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:32.958 00:12:32.958 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:32.958 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:32.958 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:33.530 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:33.530 { 00:12:33.531 "cntlid": 73, 00:12:33.531 "qid": 0, 00:12:33.531 "state": "enabled", 00:12:33.531 "thread": "nvmf_tgt_poll_group_000", 00:12:33.531 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:33.531 "listen_address": { 00:12:33.531 "trtype": "TCP", 00:12:33.531 "adrfam": "IPv4", 00:12:33.531 "traddr": "10.0.0.3", 00:12:33.531 "trsvcid": "4420" 00:12:33.531 }, 00:12:33.531 "peer_address": { 00:12:33.531 "trtype": "TCP", 00:12:33.531 "adrfam": "IPv4", 00:12:33.531 "traddr": "10.0.0.1", 00:12:33.531 "trsvcid": "53534" 00:12:33.531 }, 00:12:33.531 "auth": { 00:12:33.531 "state": "completed", 00:12:33.531 "digest": "sha384", 00:12:33.531 "dhgroup": "ffdhe4096" 00:12:33.531 } 00:12:33.531 } 00:12:33.531 ]' 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:33.531 06:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:33.791 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:33.791 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:34.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.357 06:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.616 06:05:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:34.616 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:35.182 00:12:35.182 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:35.182 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:35.182 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:35.182 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:35.182 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:35.440 { 00:12:35.440 "cntlid": 75, 00:12:35.440 "qid": 0, 00:12:35.440 "state": "enabled", 00:12:35.440 "thread": "nvmf_tgt_poll_group_000", 00:12:35.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:35.440 "listen_address": { 00:12:35.440 "trtype": "TCP", 00:12:35.440 "adrfam": "IPv4", 00:12:35.440 "traddr": "10.0.0.3", 00:12:35.440 "trsvcid": "4420" 00:12:35.440 }, 00:12:35.440 "peer_address": { 00:12:35.440 "trtype": "TCP", 00:12:35.440 "adrfam": "IPv4", 00:12:35.440 "traddr": "10.0.0.1", 00:12:35.440 "trsvcid": "34290" 00:12:35.440 }, 00:12:35.440 "auth": { 00:12:35.440 "state": "completed", 00:12:35.440 "digest": "sha384", 00:12:35.440 "dhgroup": "ffdhe4096" 00:12:35.440 } 00:12:35.440 } 00:12:35.440 ]' 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:35.440 06:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:35.705 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:35.705 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:36.333 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:36.333 06:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:36.591 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:37.166 00:12:37.166 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:37.166 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:37.166 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.424 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:37.424 { 00:12:37.424 "cntlid": 77, 00:12:37.424 "qid": 0, 00:12:37.424 "state": "enabled", 00:12:37.425 "thread": "nvmf_tgt_poll_group_000", 00:12:37.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:37.425 "listen_address": { 00:12:37.425 "trtype": "TCP", 00:12:37.425 "adrfam": "IPv4", 00:12:37.425 "traddr": "10.0.0.3", 00:12:37.425 "trsvcid": "4420" 00:12:37.425 }, 00:12:37.425 "peer_address": { 00:12:37.425 "trtype": "TCP", 00:12:37.425 "adrfam": "IPv4", 00:12:37.425 "traddr": "10.0.0.1", 00:12:37.425 "trsvcid": "34312" 00:12:37.425 }, 00:12:37.425 "auth": { 00:12:37.425 "state": "completed", 00:12:37.425 "digest": "sha384", 00:12:37.425 "dhgroup": "ffdhe4096" 00:12:37.425 } 00:12:37.425 } 00:12:37.425 ]' 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:37.425 06:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:37.683 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:37.683 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:38.249 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.249 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:38.250 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:38.250 06:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:38.508 06:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:38.508 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:39.076 00:12:39.076 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:39.076 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:39.076 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.334 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:39.334 { 00:12:39.334 "cntlid": 79, 00:12:39.334 "qid": 0, 00:12:39.334 "state": "enabled", 00:12:39.334 "thread": "nvmf_tgt_poll_group_000", 00:12:39.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:39.335 "listen_address": { 00:12:39.335 "trtype": "TCP", 00:12:39.335 "adrfam": "IPv4", 00:12:39.335 "traddr": "10.0.0.3", 00:12:39.335 "trsvcid": "4420" 00:12:39.335 }, 00:12:39.335 "peer_address": { 00:12:39.335 "trtype": "TCP", 00:12:39.335 "adrfam": "IPv4", 00:12:39.335 "traddr": "10.0.0.1", 00:12:39.335 "trsvcid": "34350" 00:12:39.335 }, 00:12:39.335 "auth": { 00:12:39.335 "state": "completed", 00:12:39.335 "digest": "sha384", 00:12:39.335 "dhgroup": "ffdhe4096" 00:12:39.335 } 00:12:39.335 } 00:12:39.335 ]' 00:12:39.335 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:39.335 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:39.335 06:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:39.335 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:12:39.335 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:39.593 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:39.593 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:39.593 06:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:39.852 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:39.852 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:40.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.420 06:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.679 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.680 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.680 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.680 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.680 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.680 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:40.938 00:12:40.938 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:40.938 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:40.938 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:41.507 { 00:12:41.507 "cntlid": 81, 00:12:41.507 "qid": 0, 00:12:41.507 "state": "enabled", 00:12:41.507 "thread": "nvmf_tgt_poll_group_000", 00:12:41.507 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:41.507 "listen_address": { 00:12:41.507 "trtype": "TCP", 00:12:41.507 "adrfam": "IPv4", 00:12:41.507 "traddr": "10.0.0.3", 00:12:41.507 "trsvcid": "4420" 00:12:41.507 }, 00:12:41.507 "peer_address": { 00:12:41.507 "trtype": "TCP", 00:12:41.507 "adrfam": "IPv4", 00:12:41.507 "traddr": "10.0.0.1", 00:12:41.507 "trsvcid": "34392" 00:12:41.507 }, 00:12:41.507 "auth": { 00:12:41.507 "state": "completed", 00:12:41.507 "digest": "sha384", 00:12:41.507 "dhgroup": "ffdhe6144" 00:12:41.507 } 00:12:41.507 } 00:12:41.507 ]' 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:41.507 06:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:41.507 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:41.507 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:41.507 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:41.767 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:41.767 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:42.336 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.336 06:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:42.905 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:43.164 00:12:43.164 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:43.164 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:43.164 06:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:43.424 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:43.424 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:43.424 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.424 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:43.711 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:43.712 { 00:12:43.712 "cntlid": 83, 00:12:43.712 "qid": 0, 00:12:43.712 "state": "enabled", 00:12:43.712 "thread": "nvmf_tgt_poll_group_000", 00:12:43.712 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:43.712 "listen_address": { 00:12:43.712 "trtype": "TCP", 00:12:43.712 "adrfam": "IPv4", 00:12:43.712 "traddr": "10.0.0.3", 00:12:43.712 "trsvcid": "4420" 00:12:43.712 }, 00:12:43.712 "peer_address": { 00:12:43.712 "trtype": "TCP", 00:12:43.712 "adrfam": "IPv4", 00:12:43.712 "traddr": "10.0.0.1", 00:12:43.712 "trsvcid": "32886" 00:12:43.712 }, 00:12:43.712 "auth": { 00:12:43.712 "state": "completed", 00:12:43.712 "digest": "sha384", 
00:12:43.712 "dhgroup": "ffdhe6144" 00:12:43.712 } 00:12:43.712 } 00:12:43.712 ]' 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:43.712 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:43.971 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:43.971 06:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:44.539 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:44.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:44.539 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:44.539 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.539 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.798 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:44.798 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:44.798 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:45.057 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:12:45.057 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.058 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:45.317 00:12:45.317 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:45.317 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:45.317 06:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:45.576 { 00:12:45.576 "cntlid": 85, 00:12:45.576 "qid": 0, 00:12:45.576 "state": "enabled", 00:12:45.576 "thread": "nvmf_tgt_poll_group_000", 00:12:45.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:45.576 "listen_address": { 00:12:45.576 "trtype": "TCP", 00:12:45.576 "adrfam": "IPv4", 00:12:45.576 "traddr": "10.0.0.3", 00:12:45.576 "trsvcid": "4420" 00:12:45.576 }, 00:12:45.576 "peer_address": { 00:12:45.576 "trtype": "TCP", 00:12:45.576 "adrfam": "IPv4", 00:12:45.576 "traddr": "10.0.0.1", 00:12:45.576 "trsvcid": "32910" 
00:12:45.576 }, 00:12:45.576 "auth": { 00:12:45.576 "state": "completed", 00:12:45.576 "digest": "sha384", 00:12:45.576 "dhgroup": "ffdhe6144" 00:12:45.576 } 00:12:45.576 } 00:12:45.576 ]' 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:45.576 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:45.577 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:45.835 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:45.835 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:45.835 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:45.835 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:45.835 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:46.106 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:46.106 06:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:46.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:46.692 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:46.952 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:47.520 00:12:47.520 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:47.520 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:47.520 06:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:47.779 { 00:12:47.779 "cntlid": 87, 00:12:47.779 "qid": 0, 00:12:47.779 "state": "enabled", 00:12:47.779 "thread": "nvmf_tgt_poll_group_000", 00:12:47.779 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:47.779 "listen_address": { 00:12:47.779 "trtype": "TCP", 00:12:47.779 "adrfam": "IPv4", 00:12:47.779 "traddr": "10.0.0.3", 00:12:47.779 "trsvcid": "4420" 00:12:47.779 }, 00:12:47.779 "peer_address": { 00:12:47.779 "trtype": "TCP", 00:12:47.779 "adrfam": "IPv4", 00:12:47.779 "traddr": "10.0.0.1", 00:12:47.779 "trsvcid": 
"32944" 00:12:47.779 }, 00:12:47.779 "auth": { 00:12:47.779 "state": "completed", 00:12:47.779 "digest": "sha384", 00:12:47.779 "dhgroup": "ffdhe6144" 00:12:47.779 } 00:12:47.779 } 00:12:47.779 ]' 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:47.779 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:48.039 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:48.039 06:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:48.978 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:48.978 06:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:49.546 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.806 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:49.806 { 00:12:49.806 "cntlid": 89, 00:12:49.806 "qid": 0, 00:12:49.806 "state": "enabled", 00:12:49.806 "thread": "nvmf_tgt_poll_group_000", 00:12:49.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:49.806 "listen_address": { 00:12:49.806 "trtype": "TCP", 00:12:49.806 "adrfam": "IPv4", 00:12:49.806 "traddr": "10.0.0.3", 00:12:49.806 "trsvcid": "4420" 00:12:49.806 }, 00:12:49.806 "peer_address": { 00:12:49.806 
"trtype": "TCP", 00:12:49.806 "adrfam": "IPv4", 00:12:49.806 "traddr": "10.0.0.1", 00:12:49.806 "trsvcid": "32972" 00:12:49.806 }, 00:12:49.806 "auth": { 00:12:49.806 "state": "completed", 00:12:49.806 "digest": "sha384", 00:12:49.806 "dhgroup": "ffdhe8192" 00:12:49.806 } 00:12:49.806 } 00:12:49.806 ]' 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:50.065 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:50.325 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:50.325 06:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:50.894 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:50.894 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:51.153 06:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.153 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.154 06:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:12:51.722 00:12:51.722 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:51.722 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:51.722 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:52.290 { 00:12:52.290 "cntlid": 91, 00:12:52.290 "qid": 0, 00:12:52.290 "state": "enabled", 00:12:52.290 "thread": "nvmf_tgt_poll_group_000", 00:12:52.290 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 
00:12:52.290 "listen_address": { 00:12:52.290 "trtype": "TCP", 00:12:52.290 "adrfam": "IPv4", 00:12:52.290 "traddr": "10.0.0.3", 00:12:52.290 "trsvcid": "4420" 00:12:52.290 }, 00:12:52.290 "peer_address": { 00:12:52.290 "trtype": "TCP", 00:12:52.290 "adrfam": "IPv4", 00:12:52.290 "traddr": "10.0.0.1", 00:12:52.290 "trsvcid": "33010" 00:12:52.290 }, 00:12:52.290 "auth": { 00:12:52.290 "state": "completed", 00:12:52.290 "digest": "sha384", 00:12:52.290 "dhgroup": "ffdhe8192" 00:12:52.290 } 00:12:52.290 } 00:12:52.290 ]' 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:52.290 06:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:52.550 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:52.550 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:53.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:53.488 06:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:53.488 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:12:54.056 00:12:54.318 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:54.318 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:54.318 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:54.578 { 00:12:54.578 "cntlid": 93, 00:12:54.578 "qid": 0, 00:12:54.578 "state": "enabled", 00:12:54.578 "thread": 
"nvmf_tgt_poll_group_000", 00:12:54.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:54.578 "listen_address": { 00:12:54.578 "trtype": "TCP", 00:12:54.578 "adrfam": "IPv4", 00:12:54.578 "traddr": "10.0.0.3", 00:12:54.578 "trsvcid": "4420" 00:12:54.578 }, 00:12:54.578 "peer_address": { 00:12:54.578 "trtype": "TCP", 00:12:54.578 "adrfam": "IPv4", 00:12:54.578 "traddr": "10.0.0.1", 00:12:54.578 "trsvcid": "35666" 00:12:54.578 }, 00:12:54.578 "auth": { 00:12:54.578 "state": "completed", 00:12:54.578 "digest": "sha384", 00:12:54.578 "dhgroup": "ffdhe8192" 00:12:54.578 } 00:12:54.578 } 00:12:54.578 ]' 00:12:54.578 06:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:54.578 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:54.837 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:54.837 06:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:55.775 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:55.775 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:55.775 06:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.034 06:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:12:56.601 00:12:56.601 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:56.601 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:56.601 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:56.860 { 00:12:56.860 "cntlid": 95, 00:12:56.860 "qid": 0, 00:12:56.860 "state": "enabled", 00:12:56.860 
"thread": "nvmf_tgt_poll_group_000", 00:12:56.860 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:56.860 "listen_address": { 00:12:56.860 "trtype": "TCP", 00:12:56.860 "adrfam": "IPv4", 00:12:56.860 "traddr": "10.0.0.3", 00:12:56.860 "trsvcid": "4420" 00:12:56.860 }, 00:12:56.860 "peer_address": { 00:12:56.860 "trtype": "TCP", 00:12:56.860 "adrfam": "IPv4", 00:12:56.860 "traddr": "10.0.0.1", 00:12:56.860 "trsvcid": "35698" 00:12:56.860 }, 00:12:56.860 "auth": { 00:12:56.860 "state": "completed", 00:12:56.860 "digest": "sha384", 00:12:56.860 "dhgroup": "ffdhe8192" 00:12:56.860 } 00:12:56.860 } 00:12:56.860 ]' 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:56.860 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:12:56.861 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:56.861 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:12:56.861 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:57.126 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:57.126 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:57.126 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:57.126 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:57.126 06:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:12:58.074 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:58.074 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:58.074 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:58.074 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.074 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.074 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.075 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:12:58.075 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:12:58.075 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:58.075 06:05:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.075 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.333 06:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:12:58.593 00:12:58.593 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:12:58.593 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:12:58.593 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.851 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:12:58.851 { 00:12:58.851 "cntlid": 97, 00:12:58.851 "qid": 0, 00:12:58.851 "state": "enabled", 00:12:58.851 "thread": "nvmf_tgt_poll_group_000", 00:12:58.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:12:58.851 "listen_address": { 00:12:58.852 "trtype": "TCP", 00:12:58.852 "adrfam": "IPv4", 00:12:58.852 "traddr": "10.0.0.3", 00:12:58.852 "trsvcid": "4420" 00:12:58.852 }, 00:12:58.852 "peer_address": { 00:12:58.852 "trtype": "TCP", 00:12:58.852 "adrfam": "IPv4", 00:12:58.852 "traddr": "10.0.0.1", 00:12:58.852 "trsvcid": "35730" 00:12:58.852 }, 00:12:58.852 "auth": { 00:12:58.852 "state": "completed", 00:12:58.852 "digest": "sha512", 00:12:58.852 "dhgroup": "null" 00:12:58.852 } 00:12:58.852 } 00:12:58.852 ]' 00:12:58.852 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:12:58.852 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:12:58.852 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:12:58.852 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:12:58.852 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:12:59.110 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:12:59.110 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:12:59.110 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:12:59.369 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:59.369 06:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:12:59.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:12:59.936 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.196 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:00.455 00:13:00.455 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:00.455 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:00.455 06:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:00.714 06:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:00.714 { 00:13:00.714 "cntlid": 99, 00:13:00.714 "qid": 0, 00:13:00.714 "state": "enabled", 00:13:00.714 "thread": "nvmf_tgt_poll_group_000", 00:13:00.714 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:00.714 "listen_address": { 00:13:00.714 "trtype": "TCP", 00:13:00.714 "adrfam": "IPv4", 00:13:00.714 "traddr": "10.0.0.3", 00:13:00.714 "trsvcid": "4420" 00:13:00.714 }, 00:13:00.714 "peer_address": { 00:13:00.714 "trtype": "TCP", 00:13:00.714 "adrfam": "IPv4", 00:13:00.714 "traddr": "10.0.0.1", 00:13:00.714 "trsvcid": "35756" 00:13:00.714 }, 00:13:00.714 "auth": { 00:13:00.714 "state": "completed", 00:13:00.714 "digest": "sha512", 00:13:00.714 "dhgroup": "null" 00:13:00.714 } 00:13:00.714 } 00:13:00.714 ]' 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:00.714 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:00.973 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:00.974 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:00.974 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:00.974 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:00.974 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:01.233 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:01.233 06:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:01.799 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:01.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:01.799 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:01.799 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.799 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.799 06:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.799 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:01.800 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:01.800 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.058 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:02.625 00:13:02.625 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:02.625 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:02.625 06:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:02.884 { 00:13:02.884 "cntlid": 101, 00:13:02.884 "qid": 0, 00:13:02.884 "state": "enabled", 00:13:02.884 "thread": "nvmf_tgt_poll_group_000", 00:13:02.884 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:02.884 "listen_address": { 00:13:02.884 "trtype": "TCP", 00:13:02.884 "adrfam": "IPv4", 00:13:02.884 "traddr": "10.0.0.3", 00:13:02.884 "trsvcid": "4420" 00:13:02.884 }, 00:13:02.884 "peer_address": { 00:13:02.884 "trtype": "TCP", 00:13:02.884 "adrfam": "IPv4", 00:13:02.884 "traddr": "10.0.0.1", 00:13:02.884 "trsvcid": "35788" 00:13:02.884 }, 00:13:02.884 "auth": { 00:13:02.884 "state": "completed", 00:13:02.884 "digest": "sha512", 00:13:02.884 "dhgroup": "null" 00:13:02.884 } 00:13:02.884 } 00:13:02.884 ]' 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:02.884 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:03.142 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:03.142 06:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:03.708 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:03.708 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:03.709 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.967 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.227 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.227 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:04.227 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.227 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:04.486 00:13:04.486 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:04.486 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:04.486 06:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:04.744 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:04.745 { 00:13:04.745 "cntlid": 103, 00:13:04.745 "qid": 0, 00:13:04.745 "state": "enabled", 00:13:04.745 "thread": "nvmf_tgt_poll_group_000", 00:13:04.745 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:04.745 "listen_address": { 00:13:04.745 "trtype": "TCP", 00:13:04.745 "adrfam": "IPv4", 00:13:04.745 "traddr": "10.0.0.3", 00:13:04.745 "trsvcid": "4420" 00:13:04.745 }, 00:13:04.745 "peer_address": { 00:13:04.745 "trtype": "TCP", 00:13:04.745 "adrfam": "IPv4", 00:13:04.745 "traddr": "10.0.0.1", 00:13:04.745 "trsvcid": "34352" 00:13:04.745 }, 00:13:04.745 "auth": { 00:13:04.745 "state": "completed", 00:13:04.745 "digest": "sha512", 00:13:04.745 "dhgroup": "null" 00:13:04.745 } 00:13:04.745 } 00:13:04.745 ]' 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:04.745 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:05.004 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:05.004 06:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:05.572 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:05.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.573 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:05.831 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:13:05.831 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:05.831 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:05.832 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:06.410 00:13:06.410 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:06.410 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:06.410 06:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:06.670 
06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:06.670 { 00:13:06.670 "cntlid": 105, 00:13:06.670 "qid": 0, 00:13:06.670 "state": "enabled", 00:13:06.670 "thread": "nvmf_tgt_poll_group_000", 00:13:06.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:06.670 "listen_address": { 00:13:06.670 "trtype": "TCP", 00:13:06.670 "adrfam": "IPv4", 00:13:06.670 "traddr": "10.0.0.3", 00:13:06.670 "trsvcid": "4420" 00:13:06.670 }, 00:13:06.670 "peer_address": { 00:13:06.670 "trtype": "TCP", 00:13:06.670 "adrfam": "IPv4", 00:13:06.670 "traddr": "10.0.0.1", 00:13:06.670 "trsvcid": "34366" 00:13:06.670 }, 00:13:06.670 "auth": { 00:13:06.670 "state": "completed", 00:13:06.670 "digest": "sha512", 00:13:06.670 "dhgroup": "ffdhe2048" 00:13:06.670 } 00:13:06.670 } 00:13:06.670 ]' 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:06.670 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:06.930 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:06.930 06:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:07.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:07.498 06:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:07.498 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.067 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:08.326 00:13:08.326 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:08.326 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:08.326 06:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:08.586 { 00:13:08.586 "cntlid": 107, 00:13:08.586 "qid": 0, 00:13:08.586 "state": "enabled", 00:13:08.586 "thread": "nvmf_tgt_poll_group_000", 00:13:08.586 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:08.586 "listen_address": { 00:13:08.586 "trtype": "TCP", 00:13:08.586 "adrfam": "IPv4", 00:13:08.586 "traddr": "10.0.0.3", 00:13:08.586 "trsvcid": "4420" 00:13:08.586 }, 00:13:08.586 "peer_address": { 00:13:08.586 "trtype": "TCP", 00:13:08.586 "adrfam": "IPv4", 00:13:08.586 "traddr": "10.0.0.1", 00:13:08.586 "trsvcid": "34380" 00:13:08.586 }, 00:13:08.586 "auth": { 00:13:08.586 "state": "completed", 00:13:08.586 "digest": "sha512", 00:13:08.586 "dhgroup": "ffdhe2048" 00:13:08.586 } 00:13:08.586 } 00:13:08.586 ]' 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:08.586 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:08.846 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:08.846 06:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:09.779 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:09.779 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:09.780 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.037 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:10.295 00:13:10.295 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:10.295 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:10.295 06:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:10.553 { 00:13:10.553 "cntlid": 109, 00:13:10.553 "qid": 0, 00:13:10.553 "state": "enabled", 00:13:10.553 "thread": "nvmf_tgt_poll_group_000", 00:13:10.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:10.553 "listen_address": { 00:13:10.553 "trtype": "TCP", 00:13:10.553 "adrfam": "IPv4", 00:13:10.553 "traddr": "10.0.0.3", 00:13:10.553 "trsvcid": "4420" 00:13:10.553 }, 00:13:10.553 "peer_address": { 00:13:10.553 "trtype": "TCP", 00:13:10.553 "adrfam": "IPv4", 00:13:10.553 "traddr": "10.0.0.1", 00:13:10.553 "trsvcid": "34408" 00:13:10.553 }, 00:13:10.553 "auth": { 00:13:10.553 "state": "completed", 00:13:10.553 "digest": "sha512", 00:13:10.553 "dhgroup": "ffdhe2048" 00:13:10.553 } 00:13:10.553 } 00:13:10.553 ]' 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:10.553 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:10.811 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:10.811 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:10.811 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:11.069 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:11.070 06:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:11.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:11.637 06:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:11.637 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:11.896 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:12.155 00:13:12.155 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.155 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.155 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.414 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.415 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.415 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.415 06:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.415 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.415 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.415 { 00:13:12.415 "cntlid": 111, 00:13:12.415 "qid": 0, 00:13:12.415 "state": "enabled", 00:13:12.415 "thread": "nvmf_tgt_poll_group_000", 00:13:12.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:12.415 "listen_address": { 00:13:12.415 "trtype": "TCP", 00:13:12.415 "adrfam": "IPv4", 00:13:12.415 "traddr": "10.0.0.3", 00:13:12.415 "trsvcid": "4420" 00:13:12.415 }, 00:13:12.415 "peer_address": { 00:13:12.415 "trtype": "TCP", 00:13:12.415 "adrfam": "IPv4", 00:13:12.415 "traddr": "10.0.0.1", 00:13:12.415 "trsvcid": "34432" 00:13:12.415 }, 00:13:12.415 "auth": { 00:13:12.415 "state": "completed", 00:13:12.415 "digest": "sha512", 00:13:12.415 "dhgroup": "ffdhe2048" 00:13:12.415 } 00:13:12.415 } 00:13:12.415 ]' 00:13:12.415 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:12.674 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:12.933 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:12.933 06:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:13.501 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:13.501 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:13.760 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:14.328 00:13:14.328 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:14.328 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:13:14.328 06:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:14.587 { 00:13:14.587 "cntlid": 113, 00:13:14.587 "qid": 0, 00:13:14.587 "state": "enabled", 00:13:14.587 "thread": "nvmf_tgt_poll_group_000", 00:13:14.587 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:14.587 "listen_address": { 00:13:14.587 "trtype": "TCP", 00:13:14.587 "adrfam": "IPv4", 00:13:14.587 "traddr": "10.0.0.3", 00:13:14.587 "trsvcid": "4420" 00:13:14.587 }, 00:13:14.587 "peer_address": { 00:13:14.587 "trtype": "TCP", 00:13:14.587 "adrfam": "IPv4", 00:13:14.587 "traddr": "10.0.0.1", 00:13:14.587 "trsvcid": "57438" 00:13:14.587 }, 00:13:14.587 "auth": { 00:13:14.587 "state": "completed", 00:13:14.587 "digest": "sha512", 00:13:14.587 "dhgroup": "ffdhe3072" 00:13:14.587 } 00:13:14.587 } 00:13:14.587 ]' 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:14.587 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:14.846 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:14.846 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:14.846 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:15.104 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:15.104 06:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret 
DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:15.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:15.732 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.006 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:16.574 00:13:16.574 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:16.574 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:16.574 06:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:16.574 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:16.574 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:16.574 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.574 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:16.834 { 00:13:16.834 "cntlid": 115, 00:13:16.834 "qid": 0, 00:13:16.834 "state": "enabled", 00:13:16.834 "thread": "nvmf_tgt_poll_group_000", 00:13:16.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:16.834 "listen_address": { 00:13:16.834 "trtype": "TCP", 00:13:16.834 "adrfam": "IPv4", 00:13:16.834 "traddr": "10.0.0.3", 00:13:16.834 "trsvcid": "4420" 00:13:16.834 }, 00:13:16.834 "peer_address": { 00:13:16.834 "trtype": "TCP", 00:13:16.834 "adrfam": "IPv4", 00:13:16.834 "traddr": "10.0.0.1", 00:13:16.834 "trsvcid": "57466" 00:13:16.834 }, 00:13:16.834 "auth": { 00:13:16.834 "state": "completed", 00:13:16.834 "digest": "sha512", 00:13:16.834 "dhgroup": "ffdhe3072" 00:13:16.834 } 00:13:16.834 } 00:13:16.834 ]' 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:16.834 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:17.093 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:17.093 06:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid 
a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:17.661 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:17.920 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:18.488 00:13:18.488 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.488 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.488 06:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.747 { 00:13:18.747 "cntlid": 117, 00:13:18.747 "qid": 0, 00:13:18.747 "state": "enabled", 00:13:18.747 "thread": "nvmf_tgt_poll_group_000", 00:13:18.747 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:18.747 "listen_address": { 00:13:18.747 "trtype": "TCP", 00:13:18.747 "adrfam": "IPv4", 00:13:18.747 "traddr": "10.0.0.3", 00:13:18.747 "trsvcid": "4420" 00:13:18.747 }, 00:13:18.747 "peer_address": { 00:13:18.747 "trtype": "TCP", 00:13:18.747 "adrfam": "IPv4", 00:13:18.747 "traddr": "10.0.0.1", 00:13:18.747 "trsvcid": "57480" 00:13:18.747 }, 00:13:18.747 "auth": { 00:13:18.747 "state": "completed", 00:13:18.747 "digest": "sha512", 00:13:18.747 "dhgroup": "ffdhe3072" 00:13:18.747 } 00:13:18.747 } 00:13:18.747 ]' 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.747 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:19.006 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:19.006 06:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.944 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:19.944 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:19.945 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:20.513 00:13:20.513 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.513 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.513 06:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.772 { 00:13:20.772 "cntlid": 119, 00:13:20.772 "qid": 0, 00:13:20.772 "state": "enabled", 00:13:20.772 "thread": "nvmf_tgt_poll_group_000", 00:13:20.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:20.772 "listen_address": { 00:13:20.772 "trtype": "TCP", 00:13:20.772 "adrfam": "IPv4", 00:13:20.772 "traddr": "10.0.0.3", 00:13:20.772 "trsvcid": "4420" 00:13:20.772 }, 00:13:20.772 "peer_address": { 00:13:20.772 "trtype": "TCP", 00:13:20.772 "adrfam": "IPv4", 00:13:20.772 "traddr": "10.0.0.1", 00:13:20.772 "trsvcid": "57516" 00:13:20.772 }, 00:13:20.772 "auth": { 00:13:20.772 "state": "completed", 00:13:20.772 "digest": "sha512", 00:13:20.772 "dhgroup": "ffdhe3072" 00:13:20.772 } 00:13:20.772 } 00:13:20.772 ]' 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.772 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.773 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:21.031 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:21.031 06:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:21.599 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.165 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:22.423 00:13:22.423 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.423 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.423 06:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.682 { 00:13:22.682 "cntlid": 121, 00:13:22.682 "qid": 0, 00:13:22.682 "state": "enabled", 00:13:22.682 "thread": "nvmf_tgt_poll_group_000", 00:13:22.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:22.682 "listen_address": { 00:13:22.682 "trtype": "TCP", 00:13:22.682 "adrfam": "IPv4", 00:13:22.682 "traddr": "10.0.0.3", 00:13:22.682 "trsvcid": "4420" 00:13:22.682 }, 00:13:22.682 "peer_address": { 00:13:22.682 "trtype": "TCP", 00:13:22.682 "adrfam": "IPv4", 00:13:22.682 "traddr": "10.0.0.1", 00:13:22.682 "trsvcid": "57554" 00:13:22.682 }, 00:13:22.682 "auth": { 00:13:22.682 "state": "completed", 00:13:22.682 "digest": "sha512", 00:13:22.682 "dhgroup": "ffdhe4096" 00:13:22.682 } 00:13:22.682 } 00:13:22.682 ]' 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.682 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:22.683 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.683 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.683 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.683 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:22.941 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret 
DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:22.941 06:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.508 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:23.508 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:24.075 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:13:24.075 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:24.075 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.076 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:24.334 00:13:24.334 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.334 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.334 06:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.649 { 00:13:24.649 "cntlid": 123, 00:13:24.649 "qid": 0, 00:13:24.649 "state": "enabled", 00:13:24.649 "thread": "nvmf_tgt_poll_group_000", 00:13:24.649 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:24.649 "listen_address": { 00:13:24.649 "trtype": "TCP", 00:13:24.649 "adrfam": "IPv4", 00:13:24.649 "traddr": "10.0.0.3", 00:13:24.649 "trsvcid": "4420" 00:13:24.649 }, 00:13:24.649 "peer_address": { 00:13:24.649 "trtype": "TCP", 00:13:24.649 "adrfam": "IPv4", 00:13:24.649 "traddr": "10.0.0.1", 00:13:24.649 "trsvcid": "45700" 00:13:24.649 }, 00:13:24.649 "auth": { 00:13:24.649 "state": "completed", 00:13:24.649 "digest": "sha512", 00:13:24.649 "dhgroup": "ffdhe4096" 00:13:24.649 } 00:13:24.649 } 00:13:24.649 ]' 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:24.649 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.907 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:24.907 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.907 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.907 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.907 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.165 06:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:25.165 06:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:25.785 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.785 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.785 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:25.785 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:25.785 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.785 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:25.786 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.786 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:25.786 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.044 06:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.044 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:26.303 00:13:26.303 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.303 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.303 06:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.562 { 00:13:26.562 "cntlid": 125, 00:13:26.562 "qid": 0, 00:13:26.562 "state": "enabled", 00:13:26.562 "thread": "nvmf_tgt_poll_group_000", 00:13:26.562 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:26.562 "listen_address": { 00:13:26.562 "trtype": "TCP", 00:13:26.562 "adrfam": "IPv4", 00:13:26.562 "traddr": "10.0.0.3", 00:13:26.562 "trsvcid": "4420" 00:13:26.562 }, 00:13:26.562 "peer_address": { 00:13:26.562 "trtype": "TCP", 00:13:26.562 "adrfam": "IPv4", 00:13:26.562 "traddr": "10.0.0.1", 00:13:26.562 "trsvcid": "45722" 00:13:26.562 }, 00:13:26.562 "auth": { 00:13:26.562 "state": "completed", 00:13:26.562 "digest": "sha512", 00:13:26.562 "dhgroup": "ffdhe4096" 00:13:26.562 } 00:13:26.562 } 00:13:26.562 ]' 00:13:26.562 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.821 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:27.080 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:27.080 06:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.649 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:27.908 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:28.477 00:13:28.477 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.477 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.477 06:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.737 { 00:13:28.737 "cntlid": 127, 00:13:28.737 "qid": 0, 00:13:28.737 "state": "enabled", 00:13:28.737 "thread": "nvmf_tgt_poll_group_000", 00:13:28.737 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:28.737 "listen_address": { 00:13:28.737 "trtype": "TCP", 00:13:28.737 "adrfam": "IPv4", 00:13:28.737 "traddr": "10.0.0.3", 00:13:28.737 "trsvcid": "4420" 00:13:28.737 }, 00:13:28.737 "peer_address": { 00:13:28.737 "trtype": "TCP", 00:13:28.737 "adrfam": "IPv4", 00:13:28.737 "traddr": "10.0.0.1", 00:13:28.737 "trsvcid": "45746" 00:13:28.737 }, 00:13:28.737 "auth": { 00:13:28.737 "state": "completed", 00:13:28.737 "digest": "sha512", 00:13:28.737 "dhgroup": "ffdhe4096" 00:13:28.737 } 00:13:28.737 } 00:13:28.737 ]' 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.737 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:28.996 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:28.996 06:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:29.932 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.932 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:29.933 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.192 06:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.192 06:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:30.761 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.761 { 00:13:30.761 "cntlid": 129, 00:13:30.761 "qid": 0, 00:13:30.761 "state": "enabled", 00:13:30.761 "thread": "nvmf_tgt_poll_group_000", 00:13:30.761 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:30.761 "listen_address": { 00:13:30.761 "trtype": "TCP", 00:13:30.761 "adrfam": "IPv4", 00:13:30.761 "traddr": "10.0.0.3", 00:13:30.761 "trsvcid": "4420" 00:13:30.761 }, 00:13:30.761 "peer_address": { 00:13:30.761 "trtype": "TCP", 00:13:30.761 "adrfam": "IPv4", 00:13:30.761 "traddr": "10.0.0.1", 00:13:30.761 "trsvcid": "45776" 00:13:30.761 }, 00:13:30.761 "auth": { 00:13:30.761 "state": "completed", 00:13:30.761 "digest": "sha512", 00:13:30.761 "dhgroup": "ffdhe6144" 00:13:30.761 } 00:13:30.761 } 00:13:30.761 ]' 00:13:30.761 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:31.020 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:31.280 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:31.280 06:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.849 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:31.849 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.109 06:05:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.109 06:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:32.677 00:13:32.677 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.677 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.677 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.936 { 00:13:32.936 "cntlid": 131, 00:13:32.936 "qid": 0, 00:13:32.936 "state": "enabled", 00:13:32.936 "thread": "nvmf_tgt_poll_group_000", 00:13:32.936 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:32.936 "listen_address": { 00:13:32.936 "trtype": "TCP", 00:13:32.936 "adrfam": "IPv4", 00:13:32.936 "traddr": "10.0.0.3", 00:13:32.936 "trsvcid": "4420" 00:13:32.936 }, 00:13:32.936 "peer_address": { 00:13:32.936 "trtype": "TCP", 00:13:32.936 "adrfam": "IPv4", 00:13:32.936 "traddr": "10.0.0.1", 00:13:32.936 "trsvcid": "45802" 00:13:32.936 }, 00:13:32.936 "auth": { 00:13:32.936 "state": "completed", 00:13:32.936 "digest": "sha512", 00:13:32.936 "dhgroup": "ffdhe6144" 00:13:32.936 } 00:13:32.936 } 00:13:32.936 ]' 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:32.936 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:13:33.195 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:33.195 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:33.195 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:33.454 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:33.454 06:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:34.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:34.020 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.280 06:05:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.280 06:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:34.539 00:13:34.539 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.539 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.539 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:35.108 { 00:13:35.108 "cntlid": 133, 00:13:35.108 "qid": 0, 00:13:35.108 "state": "enabled", 00:13:35.108 "thread": "nvmf_tgt_poll_group_000", 00:13:35.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:35.108 "listen_address": { 00:13:35.108 "trtype": "TCP", 00:13:35.108 "adrfam": "IPv4", 00:13:35.108 "traddr": "10.0.0.3", 00:13:35.108 "trsvcid": "4420" 00:13:35.108 }, 00:13:35.108 "peer_address": { 00:13:35.108 "trtype": "TCP", 00:13:35.108 "adrfam": "IPv4", 00:13:35.108 "traddr": "10.0.0.1", 00:13:35.108 "trsvcid": "43474" 00:13:35.108 }, 00:13:35.108 "auth": { 00:13:35.108 "state": "completed", 00:13:35.108 "digest": "sha512", 00:13:35.108 "dhgroup": "ffdhe6144" 00:13:35.108 } 00:13:35.108 } 00:13:35.108 ]' 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:35.108 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.367 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:35.367 06:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:35.943 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.944 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:35.944 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.944 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.202 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.202 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:36.202 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:36.202 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.461 06:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:36.720 00:13:36.720 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.721 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.721 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.979 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.979 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.979 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.979 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:37.239 { 00:13:37.239 "cntlid": 135, 00:13:37.239 "qid": 0, 00:13:37.239 "state": "enabled", 00:13:37.239 "thread": "nvmf_tgt_poll_group_000", 00:13:37.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:37.239 "listen_address": { 00:13:37.239 "trtype": "TCP", 00:13:37.239 "adrfam": "IPv4", 00:13:37.239 "traddr": "10.0.0.3", 00:13:37.239 "trsvcid": "4420" 00:13:37.239 }, 00:13:37.239 "peer_address": { 00:13:37.239 "trtype": "TCP", 00:13:37.239 "adrfam": "IPv4", 00:13:37.239 "traddr": "10.0.0.1", 00:13:37.239 "trsvcid": "43514" 00:13:37.239 }, 00:13:37.239 "auth": { 00:13:37.239 "state": "completed", 00:13:37.239 "digest": "sha512", 00:13:37.239 "dhgroup": "ffdhe6144" 00:13:37.239 } 00:13:37.239 } 00:13:37.239 ]' 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:37.239 06:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.499 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:37.499 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.438 06:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:38.697 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:39.265 00:13:39.265 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:39.265 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:39.265 06:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.525 { 00:13:39.525 "cntlid": 137, 00:13:39.525 "qid": 0, 00:13:39.525 "state": "enabled", 00:13:39.525 "thread": "nvmf_tgt_poll_group_000", 00:13:39.525 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:39.525 "listen_address": { 00:13:39.525 "trtype": "TCP", 00:13:39.525 "adrfam": "IPv4", 00:13:39.525 "traddr": "10.0.0.3", 00:13:39.525 "trsvcid": "4420" 00:13:39.525 }, 00:13:39.525 "peer_address": { 00:13:39.525 "trtype": "TCP", 00:13:39.525 "adrfam": "IPv4", 00:13:39.525 "traddr": "10.0.0.1", 00:13:39.525 "trsvcid": "43536" 00:13:39.525 }, 00:13:39.525 "auth": { 00:13:39.525 "state": "completed", 00:13:39.525 "digest": "sha512", 00:13:39.525 "dhgroup": "ffdhe8192" 00:13:39.525 } 00:13:39.525 } 00:13:39.525 ]' 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.525 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:39.525 06:06:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.784 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:39.784 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.784 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.784 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.784 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:40.043 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:40.043 06:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:40.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.611 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:40.870 06:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:40.870 06:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:41.438 00:13:41.438 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:41.438 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:41.438 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.697 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:41.697 { 00:13:41.697 "cntlid": 139, 00:13:41.697 "qid": 0, 00:13:41.697 "state": "enabled", 00:13:41.697 "thread": "nvmf_tgt_poll_group_000", 00:13:41.697 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:41.697 "listen_address": { 00:13:41.697 "trtype": "TCP", 00:13:41.697 "adrfam": "IPv4", 00:13:41.697 "traddr": "10.0.0.3", 00:13:41.697 "trsvcid": "4420" 00:13:41.697 }, 00:13:41.697 "peer_address": { 00:13:41.697 "trtype": "TCP", 00:13:41.697 "adrfam": "IPv4", 00:13:41.697 "traddr": "10.0.0.1", 00:13:41.697 "trsvcid": "43556" 00:13:41.697 }, 00:13:41.697 "auth": { 00:13:41.697 "state": "completed", 00:13:41.697 "digest": "sha512", 00:13:41.697 "dhgroup": "ffdhe8192" 00:13:41.697 } 00:13:41.697 } 00:13:41.697 ]' 00:13:41.697 06:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.956 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:42.215 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:42.215 06:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: --dhchap-ctrl-secret DHHC-1:02:OTJiMzg0Y2FiZWJhMWQ0NmNjMzQ2ZDJhMmRmZGRhZTAzMjQ2ZmFhZWRhMDUxZjFhuFjN0Q==: 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:42.782 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.041 06:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:43.609 00:13:43.867 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:43.867 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.867 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:44.127 { 00:13:44.127 "cntlid": 141, 00:13:44.127 "qid": 0, 00:13:44.127 "state": "enabled", 00:13:44.127 "thread": "nvmf_tgt_poll_group_000", 00:13:44.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:44.127 "listen_address": { 00:13:44.127 "trtype": "TCP", 00:13:44.127 "adrfam": "IPv4", 00:13:44.127 "traddr": "10.0.0.3", 00:13:44.127 "trsvcid": "4420" 00:13:44.127 }, 00:13:44.127 "peer_address": { 00:13:44.127 "trtype": "TCP", 00:13:44.127 "adrfam": "IPv4", 00:13:44.127 "traddr": "10.0.0.1", 00:13:44.127 "trsvcid": "48592" 00:13:44.127 }, 00:13:44.127 "auth": { 00:13:44.127 "state": "completed", 00:13:44.127 "digest": 
"sha512", 00:13:44.127 "dhgroup": "ffdhe8192" 00:13:44.127 } 00:13:44.127 } 00:13:44.127 ]' 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:44.127 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:44.386 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:44.386 06:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:01:MDBiN2FiMTE5ZGQ4MTViZjhhZDM3NDgxZDAwYzAyYjI0Vo/w: 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:45.327 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.327 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.586 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.586 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:45.586 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:45.586 06:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.150 00:13:46.150 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.150 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:46.150 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:46.408 { 00:13:46.408 "cntlid": 143, 00:13:46.408 "qid": 0, 00:13:46.408 "state": "enabled", 00:13:46.408 "thread": "nvmf_tgt_poll_group_000", 00:13:46.408 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:46.408 "listen_address": { 00:13:46.408 "trtype": "TCP", 00:13:46.408 "adrfam": "IPv4", 00:13:46.408 "traddr": "10.0.0.3", 00:13:46.408 "trsvcid": "4420" 00:13:46.408 }, 00:13:46.408 "peer_address": { 00:13:46.408 "trtype": "TCP", 00:13:46.408 "adrfam": "IPv4", 00:13:46.408 "traddr": "10.0.0.1", 00:13:46.408 "trsvcid": "48612" 00:13:46.408 }, 00:13:46.408 "auth": { 00:13:46.408 "state": "completed", 00:13:46.408 
"digest": "sha512", 00:13:46.408 "dhgroup": "ffdhe8192" 00:13:46.408 } 00:13:46.408 } 00:13:46.408 ]' 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:46.408 06:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:46.666 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:46.666 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:46.666 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:46.924 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:46.924 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:47.491 06:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:47.491 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:47.491 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:47.750 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.317 00:13:48.317 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:48.317 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:48.317 06:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:48.575 { 00:13:48.575 "cntlid": 145, 00:13:48.575 "qid": 0, 00:13:48.575 "state": "enabled", 00:13:48.575 "thread": "nvmf_tgt_poll_group_000", 00:13:48.575 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:48.575 "listen_address": { 00:13:48.575 "trtype": "TCP", 00:13:48.575 "adrfam": "IPv4", 00:13:48.575 "traddr": "10.0.0.3", 00:13:48.575 "trsvcid": "4420" 00:13:48.575 }, 00:13:48.575 "peer_address": { 00:13:48.575 "trtype": "TCP", 00:13:48.575 "adrfam": "IPv4", 00:13:48.575 "traddr": "10.0.0.1", 00:13:48.575 "trsvcid": "48640" 00:13:48.575 }, 00:13:48.575 "auth": { 00:13:48.575 "state": "completed", 00:13:48.575 "digest": "sha512", 00:13:48.575 "dhgroup": "ffdhe8192" 00:13:48.575 } 00:13:48.575 } 00:13:48.575 ]' 00:13:48.575 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:48.834 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.092 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:49.092 06:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:00:MzRhZDI4NDFiZTY3NDI4YWVhYjE3NDMwZTFmNmE5YzcyYWYyODFmNDFmNjg0ZDBhKIyDNQ==: --dhchap-ctrl-secret DHHC-1:03:M2MzMzdkN2NhMzU0OWYxMjJhZjI5MWZhMjQzNjRjNTlhZTI5MjA3NzJhY2E5YjBkYWQ5OWFiODc2ZDVhNDAzMy2Hyy4=: 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 00:13:50.026 06:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:50.026 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:13:50.592 request: 00:13:50.592 { 00:13:50.592 "name": "nvme0", 00:13:50.592 "trtype": "tcp", 00:13:50.592 "traddr": "10.0.0.3", 00:13:50.592 "adrfam": "ipv4", 00:13:50.592 "trsvcid": "4420", 00:13:50.592 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:50.592 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:50.592 "prchk_reftag": false, 00:13:50.592 "prchk_guard": false, 00:13:50.592 "hdgst": false, 00:13:50.593 "ddgst": false, 00:13:50.593 "dhchap_key": "key2", 00:13:50.593 "allow_unrecognized_csi": false, 00:13:50.593 "method": "bdev_nvme_attach_controller", 00:13:50.593 "req_id": 1 00:13:50.593 } 00:13:50.593 Got JSON-RPC error response 00:13:50.593 response: 00:13:50.593 { 00:13:50.593 "code": -5, 00:13:50.593 "message": "Input/output error" 00:13:50.593 } 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:50.593 
06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:50.593 06:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:13:51.162 request: 00:13:51.162 { 00:13:51.162 "name": "nvme0", 00:13:51.162 "trtype": "tcp", 00:13:51.162 "traddr": "10.0.0.3", 00:13:51.162 "adrfam": "ipv4", 00:13:51.162 "trsvcid": "4420", 00:13:51.162 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:51.162 "prchk_reftag": false, 00:13:51.162 "prchk_guard": false, 00:13:51.162 "hdgst": false, 00:13:51.162 "ddgst": false, 00:13:51.162 "dhchap_key": "key1", 00:13:51.162 "dhchap_ctrlr_key": "ckey2", 00:13:51.162 "allow_unrecognized_csi": false, 00:13:51.162 "method": "bdev_nvme_attach_controller", 00:13:51.162 "req_id": 1 00:13:51.162 } 00:13:51.162 Got JSON-RPC error response 00:13:51.162 response: 00:13:51.162 { 
00:13:51.162 "code": -5, 00:13:51.162 "message": "Input/output error" 00:13:51.162 } 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.162 06:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.730 
request: 00:13:51.730 { 00:13:51.730 "name": "nvme0", 00:13:51.730 "trtype": "tcp", 00:13:51.730 "traddr": "10.0.0.3", 00:13:51.730 "adrfam": "ipv4", 00:13:51.730 "trsvcid": "4420", 00:13:51.730 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:51.730 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:51.730 "prchk_reftag": false, 00:13:51.730 "prchk_guard": false, 00:13:51.730 "hdgst": false, 00:13:51.730 "ddgst": false, 00:13:51.730 "dhchap_key": "key1", 00:13:51.730 "dhchap_ctrlr_key": "ckey1", 00:13:51.730 "allow_unrecognized_csi": false, 00:13:51.730 "method": "bdev_nvme_attach_controller", 00:13:51.730 "req_id": 1 00:13:51.730 } 00:13:51.730 Got JSON-RPC error response 00:13:51.730 response: 00:13:51.730 { 00:13:51.730 "code": -5, 00:13:51.730 "message": "Input/output error" 00:13:51.730 } 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 79087 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79087 ']' 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79087 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:13:51.730 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79087 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:51.731 killing process with pid 79087 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79087' 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79087 00:13:51.731 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79087 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:13:51.990 06:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@505 -- # nvmfpid=82120 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # waitforlisten 82120 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82120 ']' 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:51.990 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:52.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 82120 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 82120 ']' 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:52.250 06:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.510 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:52.510 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:52.510 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:13:52.510 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.510 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.510 null0 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2bP 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.dzY ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dzY 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kdz 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.04y ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.04y 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:52.770 06:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.AO0 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.EP3 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.EP3 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.sQC 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:52.770 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:13:52.771 06:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:53.706 nvme0n1 00:13:53.706 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.706 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.706 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.965 { 00:13:53.965 "cntlid": 1, 00:13:53.965 "qid": 0, 00:13:53.965 "state": "enabled", 00:13:53.965 "thread": "nvmf_tgt_poll_group_000", 00:13:53.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:53.965 "listen_address": { 00:13:53.965 "trtype": "TCP", 00:13:53.965 "adrfam": "IPv4", 00:13:53.965 "traddr": "10.0.0.3", 00:13:53.965 "trsvcid": "4420" 00:13:53.965 }, 00:13:53.965 "peer_address": { 00:13:53.965 "trtype": "TCP", 00:13:53.965 "adrfam": "IPv4", 00:13:53.965 "traddr": "10.0.0.1", 00:13:53.965 "trsvcid": "55002" 00:13:53.965 }, 00:13:53.965 "auth": { 00:13:53.965 "state": "completed", 00:13:53.965 "digest": "sha512", 00:13:53.965 "dhgroup": "ffdhe8192" 00:13:53.965 } 00:13:53.965 } 00:13:53.965 ]' 00:13:53.965 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:54.223 06:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:54.487 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:54.487 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:55.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key3 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:13:55.429 06:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.688 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.946 request: 00:13:55.946 { 00:13:55.946 "name": "nvme0", 00:13:55.946 "trtype": "tcp", 00:13:55.946 "traddr": "10.0.0.3", 00:13:55.946 "adrfam": "ipv4", 00:13:55.946 "trsvcid": "4420", 00:13:55.946 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:55.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:55.946 "prchk_reftag": false, 00:13:55.946 "prchk_guard": false, 00:13:55.946 "hdgst": false, 00:13:55.946 "ddgst": false, 00:13:55.946 "dhchap_key": "key3", 00:13:55.946 "allow_unrecognized_csi": false, 00:13:55.946 "method": "bdev_nvme_attach_controller", 00:13:55.946 "req_id": 1 00:13:55.946 } 00:13:55.946 Got JSON-RPC error response 00:13:55.946 response: 00:13:55.946 { 00:13:55.946 "code": -5, 00:13:55.946 "message": "Input/output error" 00:13:55.946 } 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:55.946 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.205 06:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:56.772 request: 00:13:56.772 { 00:13:56.772 "name": "nvme0", 00:13:56.772 "trtype": "tcp", 00:13:56.772 "traddr": "10.0.0.3", 00:13:56.772 "adrfam": "ipv4", 00:13:56.772 "trsvcid": "4420", 00:13:56.772 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:56.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:56.772 "prchk_reftag": false, 00:13:56.772 "prchk_guard": false, 00:13:56.772 "hdgst": false, 00:13:56.772 "ddgst": false, 00:13:56.772 "dhchap_key": "key3", 00:13:56.772 "allow_unrecognized_csi": false, 00:13:56.772 "method": "bdev_nvme_attach_controller", 00:13:56.772 "req_id": 1 00:13:56.772 } 00:13:56.772 Got JSON-RPC error response 00:13:56.772 response: 00:13:56.772 { 00:13:56.772 "code": -5, 00:13:56.772 "message": "Input/output error" 00:13:56.772 } 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:56.772 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.031 06:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:13:57.600 request: 00:13:57.600 { 00:13:57.600 "name": "nvme0", 00:13:57.600 "trtype": "tcp", 00:13:57.600 "traddr": "10.0.0.3", 00:13:57.600 "adrfam": "ipv4", 00:13:57.600 "trsvcid": "4420", 00:13:57.600 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:13:57.600 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:13:57.600 "prchk_reftag": false, 00:13:57.600 "prchk_guard": false, 00:13:57.600 "hdgst": false, 00:13:57.600 "ddgst": false, 00:13:57.600 "dhchap_key": "key0", 00:13:57.600 "dhchap_ctrlr_key": "key1", 00:13:57.600 "allow_unrecognized_csi": false, 00:13:57.600 "method": "bdev_nvme_attach_controller", 00:13:57.600 "req_id": 1 00:13:57.600 } 00:13:57.600 Got JSON-RPC error response 00:13:57.600 response: 00:13:57.600 { 00:13:57.600 "code": -5, 00:13:57.600 "message": "Input/output error" 00:13:57.600 } 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:57.600 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:13:57.859 nvme0n1 00:13:57.859 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:13:57.859 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:13:57.859 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:58.428 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:58.428 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:58.428 06:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:58.687 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:13:59.623 nvme0n1 00:13:59.623 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:13:59.623 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:13:59.623 06:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.882 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.882 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key key3 00:13:59.882 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.882 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.883 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.883 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:13:59.883 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.883 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:14:00.141 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.141 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:14:00.141 06:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid a979a798-a221-4879-b3c4-5aaa753fde06 -l 0 --dhchap-secret DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: --dhchap-ctrl-secret DHHC-1:03:MjI4ZmRiM2QyMjVhY2RkMzZiYzllMmI0NDQ0OTFmZThiYmI1NmE0YjIxYmY2NmRmNWQyOWEwNzk5NWNiMjg0ZT7lCu4=: 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.710 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:00.970 06:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:14:01.539 request: 00:14:01.539 { 00:14:01.539 "name": "nvme0", 00:14:01.539 "trtype": "tcp", 00:14:01.539 "traddr": "10.0.0.3", 00:14:01.539 "adrfam": "ipv4", 00:14:01.539 "trsvcid": "4420", 00:14:01.539 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:14:01.539 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06", 00:14:01.539 "prchk_reftag": false, 00:14:01.539 "prchk_guard": false, 00:14:01.539 "hdgst": false, 00:14:01.539 "ddgst": false, 00:14:01.539 "dhchap_key": "key1", 00:14:01.539 "allow_unrecognized_csi": false, 00:14:01.539 "method": "bdev_nvme_attach_controller", 00:14:01.539 "req_id": 1 00:14:01.539 } 00:14:01.539 Got JSON-RPC error response 00:14:01.539 response: 00:14:01.539 { 00:14:01.539 "code": -5, 00:14:01.539 "message": "Input/output error" 00:14:01.539 } 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:01.539 06:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:02.918 nvme0n1 00:14:02.918 
06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:14:02.918 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:14:02.918 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.918 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.918 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.918 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:03.178 06:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:14:03.746 nvme0n1 00:14:03.746 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:14:03.746 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.746 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:14:04.005 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.005 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:04.005 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.266 06:06:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: '' 2s 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: ]] 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MDQxNWY0NjIxNmZmYWNiODg3MWU4MWY0NDI1ZDEyNWMSLUvV: 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:04.266 06:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key1 --dhchap-ctrlr-key key2 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: 2s 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:14:06.206 06:06:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: ]] 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ODNjNGFlNzJmMDc5Y2ViZDNhNGRhNTUyZTUyNzdhNmNiNjVjODY3Yzg0M2UxYjgwB9C67A==: 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:14:06.206 06:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1235 -- # local i=0 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # lsblk -l -o NAME 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1236 -- # grep -q -w nvme0n1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # lsblk -l -o NAME 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1242 -- # grep -q -w nvme0n1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # return 0 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.741 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:08.741 06:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:09.308 nvme0n1 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:09.308 06:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:14:10.262 06:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:10.830 06:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:10.830 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:14:11.397 request: 00:14:11.397 { 00:14:11.397 "name": "nvme0", 00:14:11.397 "dhchap_key": "key1", 00:14:11.397 "dhchap_ctrlr_key": "key3", 00:14:11.397 "method": "bdev_nvme_set_keys", 00:14:11.397 "req_id": 1 00:14:11.397 } 00:14:11.397 Got JSON-RPC error response 00:14:11.397 response: 00:14:11.397 { 00:14:11.397 "code": -13, 00:14:11.397 "message": "Permission denied" 00:14:11.397 } 00:14:11.397 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:11.398 06:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.964 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:14:11.964 06:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:14:12.901 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:14:12.901 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.901 06:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key0 --dhchap-ctrlr-key key1 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:13.160 06:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:14:14.096 nvme0n1 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --dhchap-key key2 --dhchap-ctrlr-key key3 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key2 --dhchap-ctrlr-key key0 00:14:14.096 06:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:14:14.663 request: 00:14:14.663 { 00:14:14.663 "name": "nvme0", 00:14:14.663 "dhchap_key": "key2", 00:14:14.663 "dhchap_ctrlr_key": "key0", 00:14:14.663 "method": "bdev_nvme_set_keys", 00:14:14.663 "req_id": 1 00:14:14.663 } 00:14:14.663 Got JSON-RPC error response 00:14:14.663 response: 00:14:14.663 { 00:14:14.663 "code": -13, 00:14:14.663 "message": "Permission denied" 00:14:14.663 } 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.663 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:15.230 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:14:15.230 06:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:14:16.167 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:14:16.167 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.167 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:14:16.441 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 79117 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 79117 ']' 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 79117 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79117 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:16.442 killing process with pid 79117 00:14:16.442 06:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79117' 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 79117 00:14:16.442 06:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 79117 00:14:16.708 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:14:16.708 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:16.709 rmmod nvme_tcp 00:14:16.709 rmmod nvme_fabrics 00:14:16.709 rmmod nvme_keyring 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@513 -- # '[' -n 82120 ']' 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@514 -- # killprocess 82120 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 82120 ']' 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 82120 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82120 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.709 killing process with pid 82120 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82120' 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 82120 00:14:16.709 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 82120 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-save 
00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # iptables-restore 00:14:16.968 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:16.969 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2bP /tmp/spdk.key-sha256.kdz /tmp/spdk.key-sha384.AO0 /tmp/spdk.key-sha512.sQC /tmp/spdk.key-sha512.dzY /tmp/spdk.key-sha384.04y /tmp/spdk.key-sha256.EP3 '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:14:17.228 ************************************ 00:14:17.228 END TEST nvmf_auth_target 00:14:17.228 ************************************ 00:14:17.228 00:14:17.228 real 3m6.313s 00:14:17.228 user 7m28.357s 00:14:17.228 sys 0m28.615s 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.228 06:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.228 ************************************ 00:14:17.228 START TEST nvmf_bdevio_no_huge 00:14:17.228 ************************************ 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:14:17.228 * Looking for test storage... 00:14:17.228 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:17.228 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:17.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.488 --rc genhtml_branch_coverage=1 00:14:17.488 --rc genhtml_function_coverage=1 00:14:17.488 --rc genhtml_legend=1 00:14:17.488 --rc geninfo_all_blocks=1 00:14:17.488 --rc geninfo_unexecuted_blocks=1 00:14:17.488 00:14:17.488 ' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:17.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.488 --rc genhtml_branch_coverage=1 00:14:17.488 --rc genhtml_function_coverage=1 00:14:17.488 --rc genhtml_legend=1 00:14:17.488 --rc geninfo_all_blocks=1 00:14:17.488 --rc geninfo_unexecuted_blocks=1 00:14:17.488 00:14:17.488 ' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:17.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.488 --rc genhtml_branch_coverage=1 00:14:17.488 --rc genhtml_function_coverage=1 00:14:17.488 --rc genhtml_legend=1 00:14:17.488 --rc geninfo_all_blocks=1 00:14:17.488 --rc geninfo_unexecuted_blocks=1 00:14:17.488 00:14:17.488 ' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:17.488 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.488 --rc genhtml_branch_coverage=1 00:14:17.488 --rc genhtml_function_coverage=1 00:14:17.488 --rc genhtml_legend=1 00:14:17.488 --rc geninfo_all_blocks=1 00:14:17.488 --rc geninfo_unexecuted_blocks=1 00:14:17.488 00:14:17.488 ' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:17.488 
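The xtrace just above steps through the harness's lcov version gate: cmp_versions splits each version string on '.', '-' and ':' (IFS=.-:), reads the pieces into arrays, and compares them component by component to decide whether the installed lcov is older than 2. A minimal standalone sketch of that comparison pattern follows; the helper name version_lt is ours and is not defined anywhere in the repo.

  # version_lt A B -> succeeds (exit 0) when version A sorts strictly before B.
  # Mirrors the component-wise walk shown in the cmp_versions trace above.
  version_lt() {
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal versions are not strictly less
  }
  version_lt 1.15 2 && echo "lcov older than 2: keep the branch/function coverage flags"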
06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.488 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.488 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:17.489 
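The common.sh setup traced above fixes the host identity once per run: nvme gen-hostnqn produces an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the uuid suffix doubles as the host ID that appears in every attach command in this log. A small sketch of reproducing those two values by hand is below; the parameter expansion used to peel off the uuid is our shorthand and not necessarily the exact line common.sh uses.

  # Derive the host NQN and host ID the same way the values above were obtained.
  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  printf 'hostnqn=%s\nhostid=%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"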
06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:17.489 Cannot find device "nvmf_init_br" 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:17.489 Cannot find device "nvmf_init_br2" 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:17.489 Cannot find device "nvmf_tgt_br" 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:17.489 Cannot find device "nvmf_tgt_br2" 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:17.489 Cannot find device "nvmf_init_br" 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:14:17.489 06:06:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:17.489 Cannot find device "nvmf_init_br2" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:17.489 Cannot find device "nvmf_tgt_br" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:17.489 Cannot find device "nvmf_tgt_br2" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:17.489 Cannot find device "nvmf_br" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:17.489 Cannot find device "nvmf_init_if" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:17.489 Cannot find device "nvmf_init_if2" 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:14:17.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:17.489 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:17.489 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:17.748 06:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:17.748 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:17.748 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:14:17.748 00:14:17.748 --- 10.0.0.3 ping statistics --- 00:14:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.748 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:17.748 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:17.748 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:14:17.748 00:14:17.748 --- 10.0.0.4 ping statistics --- 00:14:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.748 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:17.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:17.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:14:17.748 00:14:17.748 --- 10.0.0.1 ping statistics --- 00:14:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.748 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:17.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:17.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:14:17.748 00:14:17.748 --- 10.0.0.2 ping statistics --- 00:14:17.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:17.748 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@457 -- # return 0 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:17.748 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@505 -- # nvmfpid=82770 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@506 -- # waitforlisten 82770 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 82770 ']' 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.007 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.007 [2024-10-01 06:06:43.434243] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
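Before the target is started above, nvmf_veth_init builds a bridged four-interface veth topology (nvmf_init_if/nvmf_init_if2 on the host side, nvmf_tgt_if/nvmf_tgt_if2 inside the nvmf_tgt_ns_spdk namespace, all enslaved to nvmf_br) and verifies it with the four pings. A reduced sketch of the same idea with a single veth pair and no bridge is shown below; it is deliberately simpler than the harness's layout, and every command form appears verbatim in the trace.

  # Minimal single-pair variant: 10.0.0.1 stays in the root namespace,
  # 10.0.0.3 lives inside the namespace that nvmf_tgt will later run in.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_tgt_if
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Firewall exception for the NVMe/TCP port, kept verbatim from the trace.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3    # same reachability check the log performs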
00:14:18.007 [2024-10-01 06:06:43.434368] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:14:18.007 [2024-10-01 06:06:43.580097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.266 [2024-10-01 06:06:43.657012] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.266 [2024-10-01 06:06:43.657447] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.266 [2024-10-01 06:06:43.657805] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.266 [2024-10-01 06:06:43.658385] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.266 [2024-10-01 06:06:43.658583] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.266 [2024-10-01 06:06:43.658982] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:14:18.266 [2024-10-01 06:06:43.659100] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 5 00:14:18.266 [2024-10-01 06:06:43.659138] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 6 00:14:18.266 [2024-10-01 06:06:43.659140] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.266 [2024-10-01 06:06:43.664375] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 [2024-10-01 06:06:43.822878] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 Malloc0 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.266 06:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:18.266 [2024-10-01 06:06:43.867088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # config=() 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # local subsystem config 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:14:18.266 { 00:14:18.266 "params": { 00:14:18.266 "name": "Nvme$subsystem", 00:14:18.266 "trtype": "$TEST_TRANSPORT", 00:14:18.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:18.266 "adrfam": "ipv4", 00:14:18.266 "trsvcid": "$NVMF_PORT", 00:14:18.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:18.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:18.266 "hdgst": ${hdgst:-false}, 00:14:18.266 "ddgst": ${ddgst:-false} 00:14:18.266 }, 00:14:18.266 "method": "bdev_nvme_attach_controller" 00:14:18.266 } 00:14:18.266 EOF 00:14:18.266 )") 00:14:18.266 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@578 -- # cat 00:14:18.525 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@580 -- # jq . 
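
The trace above reduces to a short RPC sequence against the nvmf_tgt that was started with --no-huge: create the TCP transport, back it with a 64 MiB malloc bdev, expose that bdev through subsystem cnode1, and listen on 10.0.0.3:4420. A condensed sketch of that sequence, assuming the rpc.py path and addresses from this run and a target already serving /var/tmp/spdk.sock:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Transport and backing bdev (sizes as traced by target/bdevio.sh above)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    # Subsystem, namespace and TCP listener on the in-namespace target address
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

The generated bdevio JSON that follows simply points an initiator-side bdev_nvme_attach_controller at that listener.
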
00:14:18.525 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@581 -- # IFS=, 00:14:18.525 06:06:43 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:14:18.525 "params": { 00:14:18.525 "name": "Nvme1", 00:14:18.525 "trtype": "tcp", 00:14:18.525 "traddr": "10.0.0.3", 00:14:18.525 "adrfam": "ipv4", 00:14:18.525 "trsvcid": "4420", 00:14:18.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:18.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:18.525 "hdgst": false, 00:14:18.525 "ddgst": false 00:14:18.525 }, 00:14:18.525 "method": "bdev_nvme_attach_controller" 00:14:18.525 }' 00:14:18.525 [2024-10-01 06:06:43.930783] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:18.525 [2024-10-01 06:06:43.931084] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid82793 ] 00:14:18.525 [2024-10-01 06:06:44.069639] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:18.784 [2024-10-01 06:06:44.176986] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.784 [2024-10-01 06:06:44.177098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.784 [2024-10-01 06:06:44.177105] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.784 [2024-10-01 06:06:44.191618] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:18.784 I/O targets: 00:14:18.784 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:14:18.784 00:14:18.784 00:14:18.784 CUnit - A unit testing framework for C - Version 2.1-3 00:14:18.784 http://cunit.sourceforge.net/ 00:14:18.784 00:14:18.784 00:14:18.785 Suite: bdevio tests on: Nvme1n1 00:14:18.785 Test: blockdev write read block ...passed 00:14:18.785 Test: blockdev write zeroes read block ...passed 00:14:18.785 Test: blockdev write zeroes read no split ...passed 00:14:18.785 Test: blockdev write zeroes read split ...passed 00:14:18.785 Test: blockdev write zeroes read split partial ...passed 00:14:18.785 Test: blockdev reset ...[2024-10-01 06:06:44.398463] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:14:18.785 [2024-10-01 06:06:44.398709] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x212c2d0 (9): Bad file descriptor 00:14:19.044 [2024-10-01 06:06:44.417175] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:14:19.044 passed 00:14:19.044 Test: blockdev write read 8 blocks ...passed 00:14:19.044 Test: blockdev write read size > 128k ...passed 00:14:19.044 Test: blockdev write read invalid size ...passed 00:14:19.044 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:19.044 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:19.044 Test: blockdev write read max offset ...passed 00:14:19.044 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:19.044 Test: blockdev writev readv 8 blocks ...passed 00:14:19.044 Test: blockdev writev readv 30 x 1block ...passed 00:14:19.044 Test: blockdev writev readv block ...passed 00:14:19.044 Test: blockdev writev readv size > 128k ...passed 00:14:19.044 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:19.044 Test: blockdev comparev and writev ...[2024-10-01 06:06:44.427376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.427591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.427625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.427640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.427968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.427991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.428011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.428023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.428340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.428377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.428408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.428421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.428724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.428750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.428771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:14:19.044 [2024-10-01 06:06:44.428783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:14:19.044 passed 00:14:19.044 Test: blockdev nvme passthru rw ...passed 00:14:19.044 Test: blockdev nvme passthru vendor specific ...[2024-10-01 06:06:44.429773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.044 [2024-10-01 06:06:44.429813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.429950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.044 [2024-10-01 06:06:44.429990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.430123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.044 [2024-10-01 06:06:44.430142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:14:19.044 [2024-10-01 06:06:44.430255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:19.044 [2024-10-01 06:06:44.430273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:14:19.044 passed 00:14:19.044 Test: blockdev nvme admin passthru ...passed 00:14:19.044 Test: blockdev copy ...passed 00:14:19.044 00:14:19.044 Run Summary: Type Total Ran Passed Failed Inactive 00:14:19.044 suites 1 1 n/a 0 0 00:14:19.044 tests 23 23 23 0 0 00:14:19.044 asserts 152 152 152 0 n/a 00:14:19.044 00:14:19.044 Elapsed time = 0.180 seconds 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # nvmfcleanup 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:19.303 rmmod nvme_tcp 00:14:19.303 rmmod nvme_fabrics 00:14:19.303 rmmod nvme_keyring 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@513 -- # '[' -n 82770 ']' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@514 -- # killprocess 82770 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 82770 ']' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 82770 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82770 00:14:19.303 killing process with pid 82770 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82770' 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 82770 00:14:19.303 06:06:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 82770 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-save 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # iptables-restore 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:14:19.871 06:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:14:19.871 00:14:19.871 real 0m2.741s 00:14:19.871 user 0m7.319s 00:14:19.871 sys 0m1.309s 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.871 ************************************ 00:14:19.871 END TEST nvmf_bdevio_no_huge 00:14:19.871 ************************************ 00:14:19.871 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:20.130 ************************************ 00:14:20.130 START TEST nvmf_tls 00:14:20.130 ************************************ 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:14:20.130 * Looking for test storage... 
00:14:20.130 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lcov --version 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:14:20.130 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:20.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.131 --rc genhtml_branch_coverage=1 00:14:20.131 --rc genhtml_function_coverage=1 00:14:20.131 --rc genhtml_legend=1 00:14:20.131 --rc geninfo_all_blocks=1 00:14:20.131 --rc geninfo_unexecuted_blocks=1 00:14:20.131 00:14:20.131 ' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:20.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.131 --rc genhtml_branch_coverage=1 00:14:20.131 --rc genhtml_function_coverage=1 00:14:20.131 --rc genhtml_legend=1 00:14:20.131 --rc geninfo_all_blocks=1 00:14:20.131 --rc geninfo_unexecuted_blocks=1 00:14:20.131 00:14:20.131 ' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:20.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.131 --rc genhtml_branch_coverage=1 00:14:20.131 --rc genhtml_function_coverage=1 00:14:20.131 --rc genhtml_legend=1 00:14:20.131 --rc geninfo_all_blocks=1 00:14:20.131 --rc geninfo_unexecuted_blocks=1 00:14:20.131 00:14:20.131 ' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:20.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:20.131 --rc genhtml_branch_coverage=1 00:14:20.131 --rc genhtml_function_coverage=1 00:14:20.131 --rc genhtml_legend=1 00:14:20.131 --rc geninfo_all_blocks=1 00:14:20.131 --rc geninfo_unexecuted_blocks=1 00:14:20.131 00:14:20.131 ' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:20.131 06:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:20.131 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:14:20.131 
06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@472 -- # prepare_net_devs 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@434 -- # local -g is_hw=no 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@436 -- # remove_spdk_ns 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@456 -- # nvmf_veth_init 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:20.131 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:20.132 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:20.132 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:14:20.132 Cannot find device "nvmf_init_br" 00:14:20.132 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:14:20.132 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:14:20.390 Cannot find device "nvmf_init_br2" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:14:20.390 Cannot find device "nvmf_tgt_br" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:14:20.390 Cannot find device "nvmf_tgt_br2" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:14:20.390 Cannot find device "nvmf_init_br" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:14:20.390 Cannot find device "nvmf_init_br2" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:14:20.390 Cannot find device "nvmf_tgt_br" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:14:20.390 Cannot find device "nvmf_tgt_br2" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:14:20.390 Cannot find device "nvmf_br" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:14:20.390 Cannot find device "nvmf_init_if" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:14:20.390 Cannot find device "nvmf_init_if2" 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:20.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:20.390 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:20.390 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:20.391 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:20.391 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:14:20.391 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:14:20.391 06:06:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:14:20.391 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:14:20.649 06:06:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:14:20.649 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:20.649 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:14:20.649 00:14:20.649 --- 10.0.0.3 ping statistics --- 00:14:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.649 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:14:20.649 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:14:20.649 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.050 ms 00:14:20.649 00:14:20.649 --- 10.0.0.4 ping statistics --- 00:14:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.649 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:20.649 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:20.649 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:14:20.649 00:14:20.649 --- 10.0.0.1 ping statistics --- 00:14:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.649 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:14:20.649 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:20.649 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.041 ms 00:14:20.649 00:14:20.649 --- 10.0.0.2 ping statistics --- 00:14:20.649 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:20.649 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@457 -- # return 0 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:14:20.649 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83030 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83030 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83030 ']' 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:20.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:20.650 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.650 [2024-10-01 06:06:46.174331] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:20.650 [2024-10-01 06:06:46.174427] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:20.909 [2024-10-01 06:06:46.317231] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.909 [2024-10-01 06:06:46.359201] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:20.909 [2024-10-01 06:06:46.359561] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:20.909 [2024-10-01 06:06:46.359587] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:20.909 [2024-10-01 06:06:46.359597] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:20.909 [2024-10-01 06:06:46.359606] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:20.909 [2024-10-01 06:06:46.359640] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:14:20.909 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:14:21.168 true 00:14:21.168 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:21.168 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:14:21.427 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:14:21.427 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:14:21.427 06:06:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:21.996 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:21.996 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:14:21.996 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:14:21.996 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:14:21.996 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:14:22.564 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:14:22.564 06:06:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:14:22.822 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:14:22.822 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:14:22.822 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:22.822 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:14:23.081 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:14:23.082 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:14:23.082 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:14:23.340 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:23.340 06:06:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:14:23.607 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:14:23.607 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:14:23.607 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:14:24.179 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:14:24.179 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:14:24.438 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@728 -- # key=ffeeddccbbaa99887766554433221100 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=1 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.OlzAw3YFTC 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.lO6MNk3KrJ 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.OlzAw3YFTC 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.lO6MNk3KrJ 00:14:24.439 06:06:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:14:24.698 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:14:25.266 [2024-10-01 06:06:50.622155] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:25.266 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.OlzAw3YFTC 00:14:25.266 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.OlzAw3YFTC 00:14:25.266 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:25.525 [2024-10-01 06:06:50.968230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.525 06:06:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:25.784 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:26.043 [2024-10-01 06:06:51.628517] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:26.043 [2024-10-01 06:06:51.628887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:26.043 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:26.612 malloc0 00:14:26.612 06:06:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:26.872 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.OlzAw3YFTC 00:14:27.131 06:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:27.390 06:06:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.OlzAw3YFTC 00:14:37.367 Initializing NVMe Controllers 00:14:37.367 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:14:37.367 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:37.367 Initialization complete. Launching workers. 00:14:37.367 ======================================================== 00:14:37.367 Latency(us) 00:14:37.367 Device Information : IOPS MiB/s Average min max 00:14:37.367 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9747.19 38.07 6567.12 2053.96 10514.80 00:14:37.367 ======================================================== 00:14:37.367 Total : 9747.19 38.07 6567.12 2053.96 10514.80 00:14:37.367 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlzAw3YFTC 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OlzAw3YFTC 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83272 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83272 /var/tmp/bdevperf.sock 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83272 ']' 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:37.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
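
For reference, the format_interchange_psk / format_key helpers a few lines up turn a raw secret into the NVMeTLSkey-1:NN:<base64>: interchange strings written to the two temp key files. Below is a minimal Python sketch of that construction; format_interchange_psk here is my own stand-in for the inline "python -" helper, and the little-endian CRC-32 trailer is an assumption inferred from the decoded key material, not taken from SPDK's code.

# Sketch of the NVMe TLS interchange PSK format seen above (NVMeTLSkey-1:<digest>:<base64>:).
# Assumptions: the 4-byte CRC-32 of the configured secret is appended little-endian before
# base64 encoding, and the second field encodes the hash (1 for SHA-256, 2 for SHA-384,
# matching the digest argument this test passes).
import base64
import zlib

def format_interchange_psk(secret: str, digest: int) -> str:
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")   # CRC-32 trailer (assumed byte order)
    return f"NVMeTLSkey-1:{digest:02d}:{base64.b64encode(data + crc).decode()}:"

if __name__ == "__main__":
    # Should reproduce the key / key_2 strings logged above if the CRC assumption holds.
    print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
    print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
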
00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:37.625 06:07:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:37.625 [2024-10-01 06:07:03.045511] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:37.625 [2024-10-01 06:07:03.046818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83272 ] 00:14:37.625 [2024-10-01 06:07:03.190626] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.625 [2024-10-01 06:07:03.231959] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:37.884 [2024-10-01 06:07:03.268425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:37.884 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:37.884 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:37.884 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlzAw3YFTC 00:14:38.143 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:38.402 [2024-10-01 06:07:03.803058] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:38.402 TLSTESTn1 00:14:38.402 06:07:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:38.661 Running I/O for 10 seconds... 
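
The bdevperf half of the test is driven entirely over the app's private RPC socket: register the key file, attach the TLS controller, then let bdevperf.py issue perform_tests. The sketch below strings those three calls together in Python; the paths, NQNs and flags are copied from this log, while the wrapper function itself is only illustrative and assumes the bdevperf app is already running with -z -r /var/tmp/bdevperf.sock as launched above.

# Sketch of the run_bdevperf flow shown above, using the same RPC scripts and socket.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
BDEVPERF_PY = "/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py"
SOCK = "/var/tmp/bdevperf.sock"

def run_bdevperf(subnqn: str, hostnqn: str, psk_path: str) -> None:
    rpc = [RPC, "-s", SOCK]
    # Register the PSK file with the bdevperf app's keyring.
    subprocess.run(rpc + ["keyring_file_add_key", "key0", psk_path], check=True)
    # Attach the NVMe-oF/TCP controller over TLS using that key.
    subprocess.run(rpc + ["bdev_nvme_attach_controller", "-b", "TLSTEST",
                          "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
                          "-n", subnqn, "-q", hostnqn, "--psk", "key0"], check=True)
    # Kick off the verify workload for the configured runtime.
    subprocess.run([BDEVPERF_PY, "-t", "20", "-s", SOCK, "perform_tests"], check=True)

run_bdevperf("nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1", "/tmp/tmp.OlzAw3YFTC")
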
00:14:48.463 4419.00 IOPS, 17.26 MiB/s 4326.00 IOPS, 16.90 MiB/s 4232.67 IOPS, 16.53 MiB/s 4160.50 IOPS, 16.25 MiB/s 4100.40 IOPS, 16.02 MiB/s 4120.83 IOPS, 16.10 MiB/s 4095.43 IOPS, 16.00 MiB/s 4065.62 IOPS, 15.88 MiB/s 4007.22 IOPS, 15.65 MiB/s 3932.90 IOPS, 15.36 MiB/s 00:14:48.463 Latency(us) 00:14:48.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.463 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:14:48.463 Verification LBA range: start 0x0 length 0x2000 00:14:48.463 TLSTESTn1 : 10.02 3938.65 15.39 0.00 0.00 32437.99 6523.81 29669.93 00:14:48.463 =================================================================================================================== 00:14:48.463 Total : 3938.65 15.39 0.00 0.00 32437.99 6523.81 29669.93 00:14:48.463 { 00:14:48.463 "results": [ 00:14:48.463 { 00:14:48.463 "job": "TLSTESTn1", 00:14:48.463 "core_mask": "0x4", 00:14:48.463 "workload": "verify", 00:14:48.463 "status": "finished", 00:14:48.463 "verify_range": { 00:14:48.463 "start": 0, 00:14:48.463 "length": 8192 00:14:48.463 }, 00:14:48.463 "queue_depth": 128, 00:14:48.463 "io_size": 4096, 00:14:48.463 "runtime": 10.017645, 00:14:48.463 "iops": 3938.650251630997, 00:14:48.463 "mibps": 15.385352545433582, 00:14:48.463 "io_failed": 0, 00:14:48.463 "io_timeout": 0, 00:14:48.463 "avg_latency_us": 32437.99499225835, 00:14:48.463 "min_latency_us": 6523.810909090909, 00:14:48.463 "max_latency_us": 29669.934545454544 00:14:48.463 } 00:14:48.463 ], 00:14:48.463 "core_count": 1 00:14:48.463 } 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 83272 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83272 ']' 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83272 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:48.463 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83272 00:14:48.721 killing process with pid 83272 00:14:48.721 Received shutdown signal, test time was about 10.000000 seconds 00:14:48.721 00:14:48.721 Latency(us) 00:14:48.721 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.721 =================================================================================================================== 00:14:48.721 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83272' 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83272 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83272 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/tmp/tmp.lO6MNk3KrJ 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lO6MNk3KrJ 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:48.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:48.721 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lO6MNk3KrJ 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.lO6MNk3KrJ 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83399 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83399 /var/tmp/bdevperf.sock 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83399 ']' 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:48.722 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:48.722 [2024-10-01 06:07:14.322293] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:48.722 [2024-10-01 06:07:14.322654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83399 ] 00:14:48.980 [2024-10-01 06:07:14.461734] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:48.980 [2024-10-01 06:07:14.507573] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.980 [2024-10-01 06:07:14.542176] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:49.238 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:49.238 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:49.238 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.lO6MNk3KrJ 00:14:49.496 06:07:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:49.755 [2024-10-01 06:07:15.189204] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:49.755 [2024-10-01 06:07:15.198657] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:49.755 [2024-10-01 06:07:15.199124] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156cd30 (107): Transport endpoint is not connected 00:14:49.755 [2024-10-01 06:07:15.200115] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x156cd30 (9): Bad file descriptor 00:14:49.755 [2024-10-01 06:07:15.201111] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:49.755 [2024-10-01 06:07:15.201324] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:49.755 [2024-10-01 06:07:15.201462] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:49.755 [2024-10-01 06:07:15.201483] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
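
This first NOT case exercises the failure path: /tmp/tmp.lO6MNk3KrJ holds the second key, which was never registered on the target for host1, so the TLS handshake is rejected and the attach RPC errors out (the request/response dump follows below). A small sketch of the same expected-failure assertion, reusing the exact commands from the log; the assertion wrapper is mine.

# Sketch: attaching with a PSK the target does not know should fail (mirrors the NOT wrapper).
import subprocess

RPC = ["/home/vagrant/spdk_repo/spdk/scripts/rpc.py", "-s", "/var/tmp/bdevperf.sock"]

subprocess.run(RPC + ["keyring_file_add_key", "key0", "/tmp/tmp.lO6MNk3KrJ"], check=True)
result = subprocess.run(
    RPC + ["bdev_nvme_attach_controller", "-b", "TLSTEST", "-t", "tcp",
           "-a", "10.0.0.3", "-s", "4420", "-f", "ipv4",
           "-n", "nqn.2016-06.io.spdk:cnode1", "-q", "nqn.2016-06.io.spdk:host1",
           "--psk", "key0"],
    capture_output=True, text=True)
# The target drops the connection during the handshake, so the RPC reports
# "Input/output error" (code -5) and exits non-zero.
assert result.returncode != 0, "attach with an unregistered PSK should fail"
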
00:14:49.755 request: 00:14:49.755 { 00:14:49.755 "name": "TLSTEST", 00:14:49.755 "trtype": "tcp", 00:14:49.755 "traddr": "10.0.0.3", 00:14:49.755 "adrfam": "ipv4", 00:14:49.755 "trsvcid": "4420", 00:14:49.755 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:49.755 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:49.755 "prchk_reftag": false, 00:14:49.755 "prchk_guard": false, 00:14:49.755 "hdgst": false, 00:14:49.755 "ddgst": false, 00:14:49.755 "psk": "key0", 00:14:49.755 "allow_unrecognized_csi": false, 00:14:49.755 "method": "bdev_nvme_attach_controller", 00:14:49.755 "req_id": 1 00:14:49.755 } 00:14:49.755 Got JSON-RPC error response 00:14:49.755 response: 00:14:49.755 { 00:14:49.755 "code": -5, 00:14:49.755 "message": "Input/output error" 00:14:49.755 } 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83399 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83399 ']' 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83399 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:49.755 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83399 00:14:49.755 killing process with pid 83399 00:14:49.756 Received shutdown signal, test time was about 10.000000 seconds 00:14:49.756 00:14:49.756 Latency(us) 00:14:49.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.756 =================================================================================================================== 00:14:49.756 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:49.756 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:49.756 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:49.756 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83399' 00:14:49.756 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83399 00:14:49.756 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83399 00:14:50.015 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:50.015 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:50.015 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:50.015 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OlzAw3YFTC 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OlzAw3YFTC 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.OlzAw3YFTC 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OlzAw3YFTC 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83420 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83420 /var/tmp/bdevperf.sock 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83420 ']' 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:50.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:50.016 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:50.016 [2024-10-01 06:07:15.463625] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:50.016 [2024-10-01 06:07:15.464010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83420 ] 00:14:50.016 [2024-10-01 06:07:15.604063] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.274 [2024-10-01 06:07:15.641716] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.274 [2024-10-01 06:07:15.673115] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:50.274 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:50.274 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:50.274 06:07:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlzAw3YFTC 00:14:50.533 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:14:50.809 [2024-10-01 06:07:16.318798] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:50.809 [2024-10-01 06:07:16.328722] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:50.809 [2024-10-01 06:07:16.328764] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:14:50.809 [2024-10-01 06:07:16.328828] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:50.809 [2024-10-01 06:07:16.329338] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2430d30 (107): Transport endpoint is not connected 00:14:50.809 [2024-10-01 06:07:16.330329] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2430d30 (9): Bad file descriptor 00:14:50.809 [2024-10-01 06:07:16.331325] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:14:50.809 [2024-10-01 06:07:16.331348] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:50.809 [2024-10-01 06:07:16.331374] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:14:50.809 [2024-10-01 06:07:16.331384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
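
Here the key is fine but the identity is not: the target looks up the PSK by the identity string it logs above (NVMe0R01 <hostnqn> <subnqn>), and host2 was never added to cnode1, so nothing matches. Purely as an illustration, registering host2 on the target with the same key (the same nvmf_subsystem_add_host --psk call used during setup, which this test deliberately does not do) would let such an attach succeed.

# Sketch: allow nqn.2016-06.io.spdk:host2 to connect to cnode1 with the existing key.
# Mirrors the target-side setup RPC already used in this test for host1.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

subprocess.run([RPC, "nvmf_subsystem_add_host",
                "nqn.2016-06.io.spdk:cnode1",
                "nqn.2016-06.io.spdk:host2",
                "--psk", "key0"], check=True)
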
00:14:50.809 request: 00:14:50.809 { 00:14:50.809 "name": "TLSTEST", 00:14:50.809 "trtype": "tcp", 00:14:50.809 "traddr": "10.0.0.3", 00:14:50.809 "adrfam": "ipv4", 00:14:50.809 "trsvcid": "4420", 00:14:50.809 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:50.809 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:14:50.809 "prchk_reftag": false, 00:14:50.809 "prchk_guard": false, 00:14:50.809 "hdgst": false, 00:14:50.809 "ddgst": false, 00:14:50.809 "psk": "key0", 00:14:50.809 "allow_unrecognized_csi": false, 00:14:50.809 "method": "bdev_nvme_attach_controller", 00:14:50.809 "req_id": 1 00:14:50.809 } 00:14:50.809 Got JSON-RPC error response 00:14:50.809 response: 00:14:50.809 { 00:14:50.809 "code": -5, 00:14:50.809 "message": "Input/output error" 00:14:50.809 } 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83420 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83420 ']' 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83420 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83420 00:14:50.809 killing process with pid 83420 00:14:50.809 Received shutdown signal, test time was about 10.000000 seconds 00:14:50.809 00:14:50.809 Latency(us) 00:14:50.809 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:50.809 =================================================================================================================== 00:14:50.809 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83420' 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83420 00:14:50.809 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83420 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlzAw3YFTC 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlzAw3YFTC 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:51.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.OlzAw3YFTC 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.OlzAw3YFTC 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83441 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83441 /var/tmp/bdevperf.sock 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83441 ']' 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:51.069 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:51.069 [2024-10-01 06:07:16.578904] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:51.069 [2024-10-01 06:07:16.579202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83441 ] 00:14:51.328 [2024-10-01 06:07:16.711636] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.328 [2024-10-01 06:07:16.747266] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:51.328 [2024-10-01 06:07:16.776812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:51.328 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:51.328 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:51.328 06:07:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.OlzAw3YFTC 00:14:51.587 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:51.846 [2024-10-01 06:07:17.293167] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:51.846 [2024-10-01 06:07:17.298846] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:51.846 [2024-10-01 06:07:17.299121] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:14:51.846 [2024-10-01 06:07:17.299325] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:14:51.846 [2024-10-01 06:07:17.299967] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582d30 (107): Transport endpoint is not connected 00:14:51.846 [2024-10-01 06:07:17.300949] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x582d30 (9): Bad file descriptor 00:14:51.846 [2024-10-01 06:07:17.301948] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:14:51.846 [2024-10-01 06:07:17.301994] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:14:51.846 [2024-10-01 06:07:17.302022] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:14:51.846 [2024-10-01 06:07:17.302032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:14:51.846 request: 00:14:51.846 { 00:14:51.846 "name": "TLSTEST", 00:14:51.846 "trtype": "tcp", 00:14:51.846 "traddr": "10.0.0.3", 00:14:51.846 "adrfam": "ipv4", 00:14:51.846 "trsvcid": "4420", 00:14:51.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:14:51.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:51.846 "prchk_reftag": false, 00:14:51.846 "prchk_guard": false, 00:14:51.846 "hdgst": false, 00:14:51.846 "ddgst": false, 00:14:51.846 "psk": "key0", 00:14:51.846 "allow_unrecognized_csi": false, 00:14:51.846 "method": "bdev_nvme_attach_controller", 00:14:51.846 "req_id": 1 00:14:51.846 } 00:14:51.846 Got JSON-RPC error response 00:14:51.846 response: 00:14:51.846 { 00:14:51.846 "code": -5, 00:14:51.846 "message": "Input/output error" 00:14:51.846 } 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83441 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83441 ']' 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83441 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83441 00:14:51.847 killing process with pid 83441 00:14:51.847 Received shutdown signal, test time was about 10.000000 seconds 00:14:51.847 00:14:51.847 Latency(us) 00:14:51.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:51.847 =================================================================================================================== 00:14:51.847 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83441' 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83441 00:14:51.847 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83441 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local 
arg=run_bdevperf 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:14:52.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83462 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83462 /var/tmp/bdevperf.sock 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83462 ']' 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:52.106 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:52.106 [2024-10-01 06:07:17.553748] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:52.106 [2024-10-01 06:07:17.554090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83462 ] 00:14:52.106 [2024-10-01 06:07:17.686661] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.365 [2024-10-01 06:07:17.724998] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.365 [2024-10-01 06:07:17.754783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:52.365 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:52.365 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:52.365 06:07:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:14:52.625 [2024-10-01 06:07:18.022637] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:14:52.625 [2024-10-01 06:07:18.022866] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:14:52.625 request: 00:14:52.625 { 00:14:52.625 "name": "key0", 00:14:52.625 "path": "", 00:14:52.625 "method": "keyring_file_add_key", 00:14:52.625 "req_id": 1 00:14:52.625 } 00:14:52.625 Got JSON-RPC error response 00:14:52.625 response: 00:14:52.625 { 00:14:52.625 "code": -1, 00:14:52.625 "message": "Operation not permitted" 00:14:52.625 } 00:14:52.625 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:52.884 [2024-10-01 06:07:18.318787] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:52.884 [2024-10-01 06:07:18.319120] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:14:52.884 request: 00:14:52.884 { 00:14:52.884 "name": "TLSTEST", 00:14:52.884 "trtype": "tcp", 00:14:52.884 "traddr": "10.0.0.3", 00:14:52.884 "adrfam": "ipv4", 00:14:52.884 "trsvcid": "4420", 00:14:52.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:52.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:52.884 "prchk_reftag": false, 00:14:52.884 "prchk_guard": false, 00:14:52.884 "hdgst": false, 00:14:52.884 "ddgst": false, 00:14:52.884 "psk": "key0", 00:14:52.884 "allow_unrecognized_csi": false, 00:14:52.884 "method": "bdev_nvme_attach_controller", 00:14:52.884 "req_id": 1 00:14:52.884 } 00:14:52.884 Got JSON-RPC error response 00:14:52.884 response: 00:14:52.884 { 00:14:52.884 "code": -126, 00:14:52.884 "message": "Required key not available" 00:14:52.884 } 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83462 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83462 ']' 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83462 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:52.884 06:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83462 00:14:52.884 killing process with pid 83462 00:14:52.884 Received shutdown signal, test time was about 10.000000 seconds 00:14:52.884 00:14:52.884 Latency(us) 00:14:52.884 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:52.884 =================================================================================================================== 00:14:52.884 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83462' 00:14:52.884 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83462 00:14:52.885 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83462 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 83030 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83030 ']' 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83030 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83030 00:14:53.144 killing process with pid 83030 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83030' 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83030 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83030 00:14:53.144 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@726 -- # local prefix key digest 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:14:53.145 06:07:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@728 -- # digest=2 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@729 -- # python - 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.yw2ZFMTuJx 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.yw2ZFMTuJx 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83493 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83493 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83493 ']' 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:53.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:53.145 06:07:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.404 [2024-10-01 06:07:18.808360] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:14:53.404 [2024-10-01 06:07:18.809127] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.404 [2024-10-01 06:07:18.946096] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:53.404 [2024-10-01 06:07:18.980142] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.404 [2024-10-01 06:07:18.980197] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:53.404 [2024-10-01 06:07:18.980225] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.404 [2024-10-01 06:07:18.980232] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.404 [2024-10-01 06:07:18.980239] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:53.404 [2024-10-01 06:07:18.980264] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.404 [2024-10-01 06:07:19.012007] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yw2ZFMTuJx 00:14:53.668 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:53.927 [2024-10-01 06:07:19.330230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.927 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:14:54.186 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:14:54.445 [2024-10-01 06:07:19.846357] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:14:54.445 [2024-10-01 06:07:19.846572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:14:54.445 06:07:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:14:54.704 malloc0 00:14:54.704 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:14:54.962 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:14:55.221 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yw2ZFMTuJx 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
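
setup_nvmf_tgt is repeated above for the SHA-384 key: create the TCP transport, create cnode1, add a TLS-enabled listener (-k), back it with a malloc namespace, register /tmp/tmp.yw2ZFMTuJx as key0, and allow host1 with that key. Below is a condensed sketch of that sequence built from the same rpc.py invocations that appear in the log; the Python loop wrapping them is mine.

# Sketch of setup_nvmf_tgt for the SHA-384 key, condensed from the rpc.py calls above.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
KEY = "/tmp/tmp.yw2ZFMTuJx"
SUBNQN = "nqn.2016-06.io.spdk:cnode1"

for args in (
    ["nvmf_create_transport", "-t", "tcp", "-o"],
    ["nvmf_create_subsystem", SUBNQN, "-s", "SPDK00000000000001", "-m", "10"],
    ["nvmf_subsystem_add_listener", SUBNQN, "-t", "tcp", "-a", "10.0.0.3", "-s", "4420", "-k"],
    ["bdev_malloc_create", "32", "4096", "-b", "malloc0"],
    ["nvmf_subsystem_add_ns", SUBNQN, "malloc0", "-n", "1"],
    ["keyring_file_add_key", "key0", KEY],
    ["nvmf_subsystem_add_host", SUBNQN, "nqn.2016-06.io.spdk:host1", "--psk", "key0"],
):
    subprocess.run([RPC] + args, check=True)
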
00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yw2ZFMTuJx 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83541 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83541 /var/tmp/bdevperf.sock 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83541 ']' 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:55.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.480 06:07:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:14:55.480 [2024-10-01 06:07:20.928750] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:14:55.480 [2024-10-01 06:07:20.929033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83541 ] 00:14:55.480 [2024-10-01 06:07:21.058624] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:55.480 [2024-10-01 06:07:21.095136] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:14:55.740 [2024-10-01 06:07:21.124971] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:14:55.740 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:55.740 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:14:55.740 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:14:55.999 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:14:56.258 [2024-10-01 06:07:21.689185] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:14:56.258 TLSTESTn1 00:14:56.258 06:07:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:14:56.516 Running I/O for 10 seconds... 00:15:06.412 4224.00 IOPS, 16.50 MiB/s 4285.50 IOPS, 16.74 MiB/s 4300.33 IOPS, 16.80 MiB/s 4282.50 IOPS, 16.73 MiB/s 4229.60 IOPS, 16.52 MiB/s 4190.17 IOPS, 16.37 MiB/s 4162.43 IOPS, 16.26 MiB/s 4142.75 IOPS, 16.18 MiB/s 4120.22 IOPS, 16.09 MiB/s 4108.70 IOPS, 16.05 MiB/s 00:15:06.412 Latency(us) 00:15:06.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.412 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:06.412 Verification LBA range: start 0x0 length 0x2000 00:15:06.412 TLSTESTn1 : 10.02 4114.95 16.07 0.00 0.00 31050.39 4944.99 28478.37 00:15:06.412 =================================================================================================================== 00:15:06.412 Total : 4114.95 16.07 0.00 0.00 31050.39 4944.99 28478.37 00:15:06.412 { 00:15:06.412 "results": [ 00:15:06.412 { 00:15:06.412 "job": "TLSTESTn1", 00:15:06.412 "core_mask": "0x4", 00:15:06.412 "workload": "verify", 00:15:06.412 "status": "finished", 00:15:06.412 "verify_range": { 00:15:06.412 "start": 0, 00:15:06.412 "length": 8192 00:15:06.412 }, 00:15:06.412 "queue_depth": 128, 00:15:06.412 "io_size": 4096, 00:15:06.412 "runtime": 10.015669, 00:15:06.412 "iops": 4114.952281270477, 00:15:06.412 "mibps": 16.0740323487128, 00:15:06.412 "io_failed": 0, 00:15:06.412 "io_timeout": 0, 00:15:06.412 "avg_latency_us": 31050.391910251146, 00:15:06.412 "min_latency_us": 4944.989090909091, 00:15:06.412 "max_latency_us": 28478.37090909091 00:15:06.412 } 00:15:06.412 ], 00:15:06.412 "core_count": 1 00:15:06.412 } 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # 
killprocess 83541 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83541 ']' 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83541 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83541 00:15:06.412 killing process with pid 83541 00:15:06.412 Received shutdown signal, test time was about 10.000000 seconds 00:15:06.412 00:15:06.412 Latency(us) 00:15:06.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:06.412 =================================================================================================================== 00:15:06.412 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83541' 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83541 00:15:06.412 06:07:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83541 00:15:06.671 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.yw2ZFMTuJx 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yw2ZFMTuJx 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yw2ZFMTuJx 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.yw2ZFMTuJx 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.yw2ZFMTuJx 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:06.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
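For reference, the successful TLSTESTn1 pass earlier in this block comes down to three client-side RPCs against the bdevperf instance: register the PSK file with the keyring, attach a TLS-protected NVMe/TCP controller that references it, then drive the verify workload. A minimal sketch of that flow, assuming an SPDK checkout as the working directory, a bdevperf started with -z -r /var/tmp/bdevperf.sock, and the temporary PSK path generated by this test:

  # register the TLS PSK file under the name key0
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx
  # attach an NVMe/TCP controller with TLS, referencing the registered key
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
  # kick off the configured verify workload through the bdevperf RPC helper
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests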
00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=83669 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 83669 /var/tmp/bdevperf.sock 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83669 ']' 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:06.672 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:06.672 [2024-10-01 06:07:32.150361] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:06.672 [2024-10-01 06:07:32.150451] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83669 ] 00:15:06.672 [2024-10-01 06:07:32.284195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:06.931 [2024-10-01 06:07:32.322270] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:06.931 [2024-10-01 06:07:32.352950] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:06.931 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:06.931 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:06.931 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:07.190 [2024-10-01 06:07:32.725400] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yw2ZFMTuJx': 0100666 00:15:07.190 [2024-10-01 06:07:32.725455] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:07.190 request: 00:15:07.190 { 00:15:07.190 "name": "key0", 00:15:07.190 "path": "/tmp/tmp.yw2ZFMTuJx", 00:15:07.190 "method": "keyring_file_add_key", 00:15:07.190 "req_id": 1 00:15:07.190 } 00:15:07.190 Got JSON-RPC error response 00:15:07.190 response: 00:15:07.190 { 00:15:07.190 "code": -1, 00:15:07.190 "message": "Operation not permitted" 00:15:07.190 } 00:15:07.190 06:07:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:07.448 [2024-10-01 06:07:32.989546] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: 
*NOTICE*: TLS support is considered experimental 00:15:07.448 [2024-10-01 06:07:32.989605] bdev_nvme.c:6410:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:15:07.448 request: 00:15:07.448 { 00:15:07.448 "name": "TLSTEST", 00:15:07.448 "trtype": "tcp", 00:15:07.448 "traddr": "10.0.0.3", 00:15:07.448 "adrfam": "ipv4", 00:15:07.448 "trsvcid": "4420", 00:15:07.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:07.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:07.448 "prchk_reftag": false, 00:15:07.448 "prchk_guard": false, 00:15:07.448 "hdgst": false, 00:15:07.448 "ddgst": false, 00:15:07.448 "psk": "key0", 00:15:07.449 "allow_unrecognized_csi": false, 00:15:07.449 "method": "bdev_nvme_attach_controller", 00:15:07.449 "req_id": 1 00:15:07.449 } 00:15:07.449 Got JSON-RPC error response 00:15:07.449 response: 00:15:07.449 { 00:15:07.449 "code": -126, 00:15:07.449 "message": "Required key not available" 00:15:07.449 } 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 83669 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83669 ']' 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83669 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83669 00:15:07.449 killing process with pid 83669 00:15:07.449 Received shutdown signal, test time was about 10.000000 seconds 00:15:07.449 00:15:07.449 Latency(us) 00:15:07.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:07.449 =================================================================================================================== 00:15:07.449 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83669' 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83669 00:15:07.449 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83669 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 83493 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83493 ']' 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83493 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@955 -- # uname 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83493 00:15:07.708 killing process with pid 83493 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83493' 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83493 00:15:07.708 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83493 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83701 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83701 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83701 ']' 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:07.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:07.967 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:07.967 [2024-10-01 06:07:33.432387] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:07.967 [2024-10-01 06:07:33.432487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:07.967 [2024-10-01 06:07:33.572671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:08.225 [2024-10-01 06:07:33.613292] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.225 [2024-10-01 06:07:33.613354] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:08.225 [2024-10-01 06:07:33.613368] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.225 [2024-10-01 06:07:33.613378] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.225 [2024-10-01 06:07:33.613387] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.225 [2024-10-01 06:07:33.613429] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.225 [2024-10-01 06:07:33.646500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yw2ZFMTuJx 00:15:08.225 06:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:08.485 [2024-10-01 06:07:33.988721] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.485 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:08.744 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:09.003 [2024-10-01 06:07:34.580842] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:09.003 [2024-10-01 06:07:34.581080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:09.003 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 
00:15:09.571 malloc0 00:15:09.571 06:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:09.571 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:10.140 [2024-10-01 06:07:35.467843] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.yw2ZFMTuJx': 0100666 00:15:10.140 [2024-10-01 06:07:35.467887] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:15:10.140 request: 00:15:10.140 { 00:15:10.140 "name": "key0", 00:15:10.140 "path": "/tmp/tmp.yw2ZFMTuJx", 00:15:10.140 "method": "keyring_file_add_key", 00:15:10.140 "req_id": 1 00:15:10.140 } 00:15:10.140 Got JSON-RPC error response 00:15:10.140 response: 00:15:10.140 { 00:15:10.140 "code": -1, 00:15:10.140 "message": "Operation not permitted" 00:15:10.140 } 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:10.140 [2024-10-01 06:07:35.716003] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:15:10.140 [2024-10-01 06:07:35.716091] subsystem.c:1055:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:15:10.140 request: 00:15:10.140 { 00:15:10.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:10.140 "host": "nqn.2016-06.io.spdk:host1", 00:15:10.140 "psk": "key0", 00:15:10.140 "method": "nvmf_subsystem_add_host", 00:15:10.140 "req_id": 1 00:15:10.140 } 00:15:10.140 Got JSON-RPC error response 00:15:10.140 response: 00:15:10.140 { 00:15:10.140 "code": -32603, 00:15:10.140 "message": "Internal error" 00:15:10.140 } 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 83701 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83701 ']' 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83701 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:10.140 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83701 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:10.399 killing process with pid 83701 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83701' 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83701 
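The two errors above are the expected outcome of the earlier chmod 0666: keyring_file_add_key refuses a key file whose mode is 0100666, so key0 never enters the keyring, and the following nvmf_subsystem_add_host --psk key0 fails with "Key 'key0' does not exist". The test restores a restrictive mode before the next pass; a minimal sketch of that remediation, assuming the same temporary key path and the target's default RPC socket:

  # keyring_file_add_key only accepts PSK files with restrictive permissions
  chmod 0600 /tmp/tmp.yw2ZFMTuJx
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx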
00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83701 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.yw2ZFMTuJx 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83757 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83757 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83757 ']' 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:10.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:10.399 06:07:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.399 [2024-10-01 06:07:35.975811] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:10.399 [2024-10-01 06:07:35.975892] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.658 [2024-10-01 06:07:36.107114] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.658 [2024-10-01 06:07:36.143325] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.658 [2024-10-01 06:07:36.143593] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.658 [2024-10-01 06:07:36.143674] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.658 [2024-10-01 06:07:36.143761] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.658 [2024-10-01 06:07:36.143855] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:10.658 [2024-10-01 06:07:36.143969] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.658 [2024-10-01 06:07:36.174260] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:10.658 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:10.658 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:10.658 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:10.658 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.658 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:10.917 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.917 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:15:10.917 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yw2ZFMTuJx 00:15:10.917 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:11.176 [2024-10-01 06:07:36.579150] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:11.176 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:11.436 06:07:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:15:11.695 [2024-10-01 06:07:37.179359] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:11.695 [2024-10-01 06:07:37.179636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:11.695 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:11.954 malloc0 00:15:11.954 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:12.213 06:07:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:12.471 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=83805 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 83805 /var/tmp/bdevperf.sock 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83805 ']' 
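setup_nvmf_tgt performs the target-side provisioning seen in this block; a condensed sketch of the same RPC sequence, assuming an SPDK checkout and the default /var/tmp/spdk.sock RPC socket of the freshly started nvmf_tgt:

  # TCP transport plus a TLS-capable listener on the subsystem
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
  # namespace backed by a malloc bdev, then host authorization with the PSK
  ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0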
00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:12.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:12.730 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:12.730 [2024-10-01 06:07:38.322728] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:12.730 [2024-10-01 06:07:38.322826] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83805 ] 00:15:12.989 [2024-10-01 06:07:38.452523] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:12.989 [2024-10-01 06:07:38.497813] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.989 [2024-10-01 06:07:38.535111] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:13.248 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:13.248 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:13.248 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:13.248 06:07:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:13.816 [2024-10-01 06:07:39.128785] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:13.816 TLSTESTn1 00:15:13.816 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:15:14.076 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:15:14.076 "subsystems": [ 00:15:14.076 { 00:15:14.076 "subsystem": "keyring", 00:15:14.076 "config": [ 00:15:14.076 { 00:15:14.076 "method": "keyring_file_add_key", 00:15:14.076 "params": { 00:15:14.076 "name": "key0", 00:15:14.076 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:14.076 } 00:15:14.076 } 00:15:14.076 ] 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "subsystem": "iobuf", 00:15:14.076 "config": [ 00:15:14.076 { 00:15:14.076 "method": "iobuf_set_options", 00:15:14.076 "params": { 00:15:14.076 "small_pool_count": 8192, 00:15:14.076 "large_pool_count": 1024, 00:15:14.076 "small_bufsize": 8192, 00:15:14.076 "large_bufsize": 135168 00:15:14.076 } 00:15:14.076 } 00:15:14.076 ] 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "subsystem": "sock", 00:15:14.076 "config": [ 00:15:14.076 { 00:15:14.076 "method": "sock_set_default_impl", 00:15:14.076 "params": { 00:15:14.076 "impl_name": "uring" 00:15:14.076 
} 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "sock_impl_set_options", 00:15:14.076 "params": { 00:15:14.076 "impl_name": "ssl", 00:15:14.076 "recv_buf_size": 4096, 00:15:14.076 "send_buf_size": 4096, 00:15:14.076 "enable_recv_pipe": true, 00:15:14.076 "enable_quickack": false, 00:15:14.076 "enable_placement_id": 0, 00:15:14.076 "enable_zerocopy_send_server": true, 00:15:14.076 "enable_zerocopy_send_client": false, 00:15:14.076 "zerocopy_threshold": 0, 00:15:14.076 "tls_version": 0, 00:15:14.076 "enable_ktls": false 00:15:14.076 } 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "sock_impl_set_options", 00:15:14.076 "params": { 00:15:14.076 "impl_name": "posix", 00:15:14.076 "recv_buf_size": 2097152, 00:15:14.076 "send_buf_size": 2097152, 00:15:14.076 "enable_recv_pipe": true, 00:15:14.076 "enable_quickack": false, 00:15:14.076 "enable_placement_id": 0, 00:15:14.076 "enable_zerocopy_send_server": true, 00:15:14.076 "enable_zerocopy_send_client": false, 00:15:14.076 "zerocopy_threshold": 0, 00:15:14.076 "tls_version": 0, 00:15:14.076 "enable_ktls": false 00:15:14.076 } 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "sock_impl_set_options", 00:15:14.076 "params": { 00:15:14.076 "impl_name": "uring", 00:15:14.076 "recv_buf_size": 2097152, 00:15:14.076 "send_buf_size": 2097152, 00:15:14.076 "enable_recv_pipe": true, 00:15:14.076 "enable_quickack": false, 00:15:14.076 "enable_placement_id": 0, 00:15:14.076 "enable_zerocopy_send_server": false, 00:15:14.076 "enable_zerocopy_send_client": false, 00:15:14.076 "zerocopy_threshold": 0, 00:15:14.076 "tls_version": 0, 00:15:14.076 "enable_ktls": false 00:15:14.076 } 00:15:14.076 } 00:15:14.076 ] 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "subsystem": "vmd", 00:15:14.076 "config": [] 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "subsystem": "accel", 00:15:14.076 "config": [ 00:15:14.076 { 00:15:14.076 "method": "accel_set_options", 00:15:14.076 "params": { 00:15:14.076 "small_cache_size": 128, 00:15:14.076 "large_cache_size": 16, 00:15:14.076 "task_count": 2048, 00:15:14.076 "sequence_count": 2048, 00:15:14.076 "buf_count": 2048 00:15:14.076 } 00:15:14.076 } 00:15:14.076 ] 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "subsystem": "bdev", 00:15:14.076 "config": [ 00:15:14.076 { 00:15:14.076 "method": "bdev_set_options", 00:15:14.076 "params": { 00:15:14.076 "bdev_io_pool_size": 65535, 00:15:14.076 "bdev_io_cache_size": 256, 00:15:14.076 "bdev_auto_examine": true, 00:15:14.076 "iobuf_small_cache_size": 128, 00:15:14.076 "iobuf_large_cache_size": 16 00:15:14.076 } 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "bdev_raid_set_options", 00:15:14.076 "params": { 00:15:14.076 "process_window_size_kb": 1024, 00:15:14.076 "process_max_bandwidth_mb_sec": 0 00:15:14.076 } 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "bdev_iscsi_set_options", 00:15:14.076 "params": { 00:15:14.076 "timeout_sec": 30 00:15:14.076 } 00:15:14.076 }, 00:15:14.076 { 00:15:14.076 "method": "bdev_nvme_set_options", 00:15:14.076 "params": { 00:15:14.076 "action_on_timeout": "none", 00:15:14.076 "timeout_us": 0, 00:15:14.076 "timeout_admin_us": 0, 00:15:14.076 "keep_alive_timeout_ms": 10000, 00:15:14.076 "arbitration_burst": 0, 00:15:14.076 "low_priority_weight": 0, 00:15:14.076 "medium_priority_weight": 0, 00:15:14.076 "high_priority_weight": 0, 00:15:14.076 "nvme_adminq_poll_period_us": 10000, 00:15:14.076 "nvme_ioq_poll_period_us": 0, 00:15:14.076 "io_queue_requests": 0, 00:15:14.076 "delay_cmd_submit": true, 00:15:14.076 "transport_retry_count": 4, 
00:15:14.076 "bdev_retry_count": 3, 00:15:14.076 "transport_ack_timeout": 0, 00:15:14.076 "ctrlr_loss_timeout_sec": 0, 00:15:14.076 "reconnect_delay_sec": 0, 00:15:14.076 "fast_io_fail_timeout_sec": 0, 00:15:14.076 "disable_auto_failback": false, 00:15:14.076 "generate_uuids": false, 00:15:14.076 "transport_tos": 0, 00:15:14.076 "nvme_error_stat": false, 00:15:14.076 "rdma_srq_size": 0, 00:15:14.076 "io_path_stat": false, 00:15:14.076 "allow_accel_sequence": false, 00:15:14.076 "rdma_max_cq_size": 0, 00:15:14.076 "rdma_cm_event_timeout_ms": 0, 00:15:14.076 "dhchap_digests": [ 00:15:14.076 "sha256", 00:15:14.076 "sha384", 00:15:14.076 "sha512" 00:15:14.076 ], 00:15:14.076 "dhchap_dhgroups": [ 00:15:14.076 "null", 00:15:14.076 "ffdhe2048", 00:15:14.076 "ffdhe3072", 00:15:14.076 "ffdhe4096", 00:15:14.076 "ffdhe6144", 00:15:14.076 "ffdhe8192" 00:15:14.076 ] 00:15:14.076 } 00:15:14.076 }, 00:15:14.077 { 00:15:14.077 "method": "bdev_nvme_set_hotplug", 00:15:14.077 "params": { 00:15:14.077 "period_us": 100000, 00:15:14.077 "enable": false 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "bdev_malloc_create", 00:15:14.077 "params": { 00:15:14.077 "name": "malloc0", 00:15:14.077 "num_blocks": 8192, 00:15:14.077 "block_size": 4096, 00:15:14.077 "physical_block_size": 4096, 00:15:14.077 "uuid": "e0458b4d-083e-4d1d-be9d-b93b701f57f5", 00:15:14.077 "optimal_io_boundary": 0, 00:15:14.077 "md_size": 0, 00:15:14.077 "dif_type": 0, 00:15:14.077 "dif_is_head_of_md": false, 00:15:14.077 "dif_pi_format": 0 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "bdev_wait_for_examine" 00:15:14.077 } 00:15:14.077 ] 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "subsystem": "nbd", 00:15:14.077 "config": [] 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "subsystem": "scheduler", 00:15:14.077 "config": [ 00:15:14.077 { 00:15:14.077 "method": "framework_set_scheduler", 00:15:14.077 "params": { 00:15:14.077 "name": "static" 00:15:14.077 } 00:15:14.077 } 00:15:14.077 ] 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "subsystem": "nvmf", 00:15:14.077 "config": [ 00:15:14.077 { 00:15:14.077 "method": "nvmf_set_config", 00:15:14.077 "params": { 00:15:14.077 "discovery_filter": "match_any", 00:15:14.077 "admin_cmd_passthru": { 00:15:14.077 "identify_ctrlr": false 00:15:14.077 }, 00:15:14.077 "dhchap_digests": [ 00:15:14.077 "sha256", 00:15:14.077 "sha384", 00:15:14.077 "sha512" 00:15:14.077 ], 00:15:14.077 "dhchap_dhgroups": [ 00:15:14.077 "null", 00:15:14.077 "ffdhe2048", 00:15:14.077 "ffdhe3072", 00:15:14.077 "ffdhe4096", 00:15:14.077 "ffdhe6144", 00:15:14.077 "ffdhe8192" 00:15:14.077 ] 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_set_max_subsystems", 00:15:14.077 "params": { 00:15:14.077 "max_subsystems": 1024 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_set_crdt", 00:15:14.077 "params": { 00:15:14.077 "crdt1": 0, 00:15:14.077 "crdt2": 0, 00:15:14.077 "crdt3": 0 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_create_transport", 00:15:14.077 "params": { 00:15:14.077 "trtype": "TCP", 00:15:14.077 "max_queue_depth": 128, 00:15:14.077 "max_io_qpairs_per_ctrlr": 127, 00:15:14.077 "in_capsule_data_size": 4096, 00:15:14.077 "max_io_size": 131072, 00:15:14.077 "io_unit_size": 131072, 00:15:14.077 "max_aq_depth": 128, 00:15:14.077 "num_shared_buffers": 511, 00:15:14.077 "buf_cache_size": 4294967295, 00:15:14.077 "dif_insert_or_strip": false, 00:15:14.077 "zcopy": false, 00:15:14.077 "c2h_success": false, 00:15:14.077 
"sock_priority": 0, 00:15:14.077 "abort_timeout_sec": 1, 00:15:14.077 "ack_timeout": 0, 00:15:14.077 "data_wr_pool_size": 0 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_create_subsystem", 00:15:14.077 "params": { 00:15:14.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.077 "allow_any_host": false, 00:15:14.077 "serial_number": "SPDK00000000000001", 00:15:14.077 "model_number": "SPDK bdev Controller", 00:15:14.077 "max_namespaces": 10, 00:15:14.077 "min_cntlid": 1, 00:15:14.077 "max_cntlid": 65519, 00:15:14.077 "ana_reporting": false 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_subsystem_add_host", 00:15:14.077 "params": { 00:15:14.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.077 "host": "nqn.2016-06.io.spdk:host1", 00:15:14.077 "psk": "key0" 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_subsystem_add_ns", 00:15:14.077 "params": { 00:15:14.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.077 "namespace": { 00:15:14.077 "nsid": 1, 00:15:14.077 "bdev_name": "malloc0", 00:15:14.077 "nguid": "E0458B4D083E4D1DBE9DB93B701F57F5", 00:15:14.077 "uuid": "e0458b4d-083e-4d1d-be9d-b93b701f57f5", 00:15:14.077 "no_auto_visible": false 00:15:14.077 } 00:15:14.077 } 00:15:14.077 }, 00:15:14.077 { 00:15:14.077 "method": "nvmf_subsystem_add_listener", 00:15:14.077 "params": { 00:15:14.077 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.077 "listen_address": { 00:15:14.077 "trtype": "TCP", 00:15:14.077 "adrfam": "IPv4", 00:15:14.077 "traddr": "10.0.0.3", 00:15:14.077 "trsvcid": "4420" 00:15:14.077 }, 00:15:14.077 "secure_channel": true 00:15:14.077 } 00:15:14.077 } 00:15:14.077 ] 00:15:14.077 } 00:15:14.077 ] 00:15:14.077 }' 00:15:14.077 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:14.337 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:15:14.337 "subsystems": [ 00:15:14.337 { 00:15:14.337 "subsystem": "keyring", 00:15:14.337 "config": [ 00:15:14.337 { 00:15:14.337 "method": "keyring_file_add_key", 00:15:14.337 "params": { 00:15:14.337 "name": "key0", 00:15:14.337 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:14.337 } 00:15:14.337 } 00:15:14.337 ] 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "subsystem": "iobuf", 00:15:14.337 "config": [ 00:15:14.337 { 00:15:14.337 "method": "iobuf_set_options", 00:15:14.337 "params": { 00:15:14.337 "small_pool_count": 8192, 00:15:14.337 "large_pool_count": 1024, 00:15:14.337 "small_bufsize": 8192, 00:15:14.337 "large_bufsize": 135168 00:15:14.337 } 00:15:14.337 } 00:15:14.337 ] 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "subsystem": "sock", 00:15:14.337 "config": [ 00:15:14.337 { 00:15:14.337 "method": "sock_set_default_impl", 00:15:14.337 "params": { 00:15:14.337 "impl_name": "uring" 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "sock_impl_set_options", 00:15:14.337 "params": { 00:15:14.337 "impl_name": "ssl", 00:15:14.337 "recv_buf_size": 4096, 00:15:14.337 "send_buf_size": 4096, 00:15:14.337 "enable_recv_pipe": true, 00:15:14.337 "enable_quickack": false, 00:15:14.337 "enable_placement_id": 0, 00:15:14.337 "enable_zerocopy_send_server": true, 00:15:14.337 "enable_zerocopy_send_client": false, 00:15:14.337 "zerocopy_threshold": 0, 00:15:14.337 "tls_version": 0, 00:15:14.337 "enable_ktls": false 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "sock_impl_set_options", 00:15:14.337 "params": { 
00:15:14.337 "impl_name": "posix", 00:15:14.337 "recv_buf_size": 2097152, 00:15:14.337 "send_buf_size": 2097152, 00:15:14.337 "enable_recv_pipe": true, 00:15:14.337 "enable_quickack": false, 00:15:14.337 "enable_placement_id": 0, 00:15:14.337 "enable_zerocopy_send_server": true, 00:15:14.337 "enable_zerocopy_send_client": false, 00:15:14.337 "zerocopy_threshold": 0, 00:15:14.337 "tls_version": 0, 00:15:14.337 "enable_ktls": false 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "sock_impl_set_options", 00:15:14.337 "params": { 00:15:14.337 "impl_name": "uring", 00:15:14.337 "recv_buf_size": 2097152, 00:15:14.337 "send_buf_size": 2097152, 00:15:14.337 "enable_recv_pipe": true, 00:15:14.337 "enable_quickack": false, 00:15:14.337 "enable_placement_id": 0, 00:15:14.337 "enable_zerocopy_send_server": false, 00:15:14.337 "enable_zerocopy_send_client": false, 00:15:14.337 "zerocopy_threshold": 0, 00:15:14.337 "tls_version": 0, 00:15:14.337 "enable_ktls": false 00:15:14.337 } 00:15:14.337 } 00:15:14.337 ] 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "subsystem": "vmd", 00:15:14.337 "config": [] 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "subsystem": "accel", 00:15:14.337 "config": [ 00:15:14.337 { 00:15:14.337 "method": "accel_set_options", 00:15:14.337 "params": { 00:15:14.337 "small_cache_size": 128, 00:15:14.337 "large_cache_size": 16, 00:15:14.337 "task_count": 2048, 00:15:14.337 "sequence_count": 2048, 00:15:14.337 "buf_count": 2048 00:15:14.337 } 00:15:14.337 } 00:15:14.337 ] 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "subsystem": "bdev", 00:15:14.337 "config": [ 00:15:14.337 { 00:15:14.337 "method": "bdev_set_options", 00:15:14.337 "params": { 00:15:14.337 "bdev_io_pool_size": 65535, 00:15:14.337 "bdev_io_cache_size": 256, 00:15:14.337 "bdev_auto_examine": true, 00:15:14.337 "iobuf_small_cache_size": 128, 00:15:14.337 "iobuf_large_cache_size": 16 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "bdev_raid_set_options", 00:15:14.337 "params": { 00:15:14.337 "process_window_size_kb": 1024, 00:15:14.337 "process_max_bandwidth_mb_sec": 0 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "bdev_iscsi_set_options", 00:15:14.337 "params": { 00:15:14.337 "timeout_sec": 30 00:15:14.337 } 00:15:14.337 }, 00:15:14.337 { 00:15:14.337 "method": "bdev_nvme_set_options", 00:15:14.337 "params": { 00:15:14.337 "action_on_timeout": "none", 00:15:14.337 "timeout_us": 0, 00:15:14.337 "timeout_admin_us": 0, 00:15:14.337 "keep_alive_timeout_ms": 10000, 00:15:14.337 "arbitration_burst": 0, 00:15:14.338 "low_priority_weight": 0, 00:15:14.338 "medium_priority_weight": 0, 00:15:14.338 "high_priority_weight": 0, 00:15:14.338 "nvme_adminq_poll_period_us": 10000, 00:15:14.338 "nvme_ioq_poll_period_us": 0, 00:15:14.338 "io_queue_requests": 512, 00:15:14.338 "delay_cmd_submit": true, 00:15:14.338 "transport_retry_count": 4, 00:15:14.338 "bdev_retry_count": 3, 00:15:14.338 "transport_ack_timeout": 0, 00:15:14.338 "ctrlr_loss_timeout_sec": 0, 00:15:14.338 "reconnect_delay_sec": 0, 00:15:14.338 "fast_io_fail_timeout_sec": 0, 00:15:14.338 "disable_auto_failback": false, 00:15:14.338 "generate_uuids": false, 00:15:14.338 "transport_tos": 0, 00:15:14.338 "nvme_error_stat": false, 00:15:14.338 "rdma_srq_size": 0, 00:15:14.338 "io_path_stat": false, 00:15:14.338 "allow_accel_sequence": false, 00:15:14.338 "rdma_max_cq_size": 0, 00:15:14.338 "rdma_cm_event_timeout_ms": 0, 00:15:14.338 "dhchap_digests": [ 00:15:14.338 "sha256", 00:15:14.338 "sha384", 00:15:14.338 "sha512" 
00:15:14.338 ], 00:15:14.338 "dhchap_dhgroups": [ 00:15:14.338 "null", 00:15:14.338 "ffdhe2048", 00:15:14.338 "ffdhe3072", 00:15:14.338 "ffdhe4096", 00:15:14.338 "ffdhe6144", 00:15:14.338 "ffdhe8192" 00:15:14.338 ] 00:15:14.338 } 00:15:14.338 }, 00:15:14.338 { 00:15:14.338 "method": "bdev_nvme_attach_controller", 00:15:14.338 "params": { 00:15:14.338 "name": "TLSTEST", 00:15:14.338 "trtype": "TCP", 00:15:14.338 "adrfam": "IPv4", 00:15:14.338 "traddr": "10.0.0.3", 00:15:14.338 "trsvcid": "4420", 00:15:14.338 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.338 "prchk_reftag": false, 00:15:14.338 "prchk_guard": false, 00:15:14.338 "ctrlr_loss_timeout_sec": 0, 00:15:14.338 "reconnect_delay_sec": 0, 00:15:14.338 "fast_io_fail_timeout_sec": 0, 00:15:14.338 "psk": "key0", 00:15:14.338 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.338 "hdgst": false, 00:15:14.338 "ddgst": false 00:15:14.338 } 00:15:14.338 }, 00:15:14.338 { 00:15:14.338 "method": "bdev_nvme_set_hotplug", 00:15:14.338 "params": { 00:15:14.338 "period_us": 100000, 00:15:14.338 "enable": false 00:15:14.338 } 00:15:14.338 }, 00:15:14.338 { 00:15:14.338 "method": "bdev_wait_for_examine" 00:15:14.338 } 00:15:14.338 ] 00:15:14.338 }, 00:15:14.338 { 00:15:14.338 "subsystem": "nbd", 00:15:14.338 "config": [] 00:15:14.338 } 00:15:14.338 ] 00:15:14.338 }' 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 83805 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83805 ']' 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83805 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83805 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:14.338 killing process with pid 83805 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83805' 00:15:14.338 Received shutdown signal, test time was about 10.000000 seconds 00:15:14.338 00:15:14.338 Latency(us) 00:15:14.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.338 =================================================================================================================== 00:15:14.338 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83805 00:15:14.338 06:07:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83805 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 83757 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83757 ']' 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83757 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 
-- # '[' Linux = Linux ']' 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83757 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:14.598 killing process with pid 83757 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83757' 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83757 00:15:14.598 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83757 00:15:14.858 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:15:14.858 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:14.858 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:14.858 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.858 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:15:14.858 "subsystems": [ 00:15:14.858 { 00:15:14.858 "subsystem": "keyring", 00:15:14.858 "config": [ 00:15:14.858 { 00:15:14.858 "method": "keyring_file_add_key", 00:15:14.858 "params": { 00:15:14.858 "name": "key0", 00:15:14.858 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:14.858 } 00:15:14.858 } 00:15:14.858 ] 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "subsystem": "iobuf", 00:15:14.858 "config": [ 00:15:14.858 { 00:15:14.858 "method": "iobuf_set_options", 00:15:14.858 "params": { 00:15:14.858 "small_pool_count": 8192, 00:15:14.858 "large_pool_count": 1024, 00:15:14.858 "small_bufsize": 8192, 00:15:14.858 "large_bufsize": 135168 00:15:14.858 } 00:15:14.858 } 00:15:14.858 ] 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "subsystem": "sock", 00:15:14.858 "config": [ 00:15:14.858 { 00:15:14.858 "method": "sock_set_default_impl", 00:15:14.858 "params": { 00:15:14.858 "impl_name": "uring" 00:15:14.858 } 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "method": "sock_impl_set_options", 00:15:14.858 "params": { 00:15:14.858 "impl_name": "ssl", 00:15:14.858 "recv_buf_size": 4096, 00:15:14.858 "send_buf_size": 4096, 00:15:14.858 "enable_recv_pipe": true, 00:15:14.858 "enable_quickack": false, 00:15:14.858 "enable_placement_id": 0, 00:15:14.858 "enable_zerocopy_send_server": true, 00:15:14.858 "enable_zerocopy_send_client": false, 00:15:14.858 "zerocopy_threshold": 0, 00:15:14.858 "tls_version": 0, 00:15:14.858 "enable_ktls": false 00:15:14.858 } 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "method": "sock_impl_set_options", 00:15:14.858 "params": { 00:15:14.858 "impl_name": "posix", 00:15:14.858 "recv_buf_size": 2097152, 00:15:14.858 "send_buf_size": 2097152, 00:15:14.858 "enable_recv_pipe": true, 00:15:14.858 "enable_quickack": false, 00:15:14.858 "enable_placement_id": 0, 00:15:14.858 "enable_zerocopy_send_server": true, 00:15:14.858 "enable_zerocopy_send_client": false, 00:15:14.858 "zerocopy_threshold": 0, 00:15:14.858 "tls_version": 0, 00:15:14.858 "enable_ktls": false 00:15:14.858 } 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "method": "sock_impl_set_options", 00:15:14.858 "params": { 00:15:14.858 "impl_name": "uring", 00:15:14.858 "recv_buf_size": 2097152, 00:15:14.858 "send_buf_size": 2097152, 
00:15:14.858 "enable_recv_pipe": true, 00:15:14.858 "enable_quickack": false, 00:15:14.858 "enable_placement_id": 0, 00:15:14.858 "enable_zerocopy_send_server": false, 00:15:14.858 "enable_zerocopy_send_client": false, 00:15:14.858 "zerocopy_threshold": 0, 00:15:14.858 "tls_version": 0, 00:15:14.858 "enable_ktls": false 00:15:14.858 } 00:15:14.858 } 00:15:14.858 ] 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "subsystem": "vmd", 00:15:14.858 "config": [] 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "subsystem": "accel", 00:15:14.858 "config": [ 00:15:14.858 { 00:15:14.858 "method": "accel_set_options", 00:15:14.858 "params": { 00:15:14.858 "small_cache_size": 128, 00:15:14.858 "large_cache_size": 16, 00:15:14.858 "task_count": 2048, 00:15:14.858 "sequence_count": 2048, 00:15:14.858 "buf_count": 2048 00:15:14.858 } 00:15:14.858 } 00:15:14.858 ] 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "subsystem": "bdev", 00:15:14.858 "config": [ 00:15:14.858 { 00:15:14.858 "method": "bdev_set_options", 00:15:14.858 "params": { 00:15:14.858 "bdev_io_pool_size": 65535, 00:15:14.858 "bdev_io_cache_size": 256, 00:15:14.858 "bdev_auto_examine": true, 00:15:14.858 "iobuf_small_cache_size": 128, 00:15:14.858 "iobuf_large_cache_size": 16 00:15:14.858 } 00:15:14.858 }, 00:15:14.858 { 00:15:14.858 "method": "bdev_raid_set_options", 00:15:14.858 "params": { 00:15:14.858 "process_window_size_kb": 1024, 00:15:14.858 "process_max_bandwidth_mb_sec": 0 00:15:14.858 } 00:15:14.858 }, 00:15:14.858 { 00:15:14.859 "method": "bdev_iscsi_set_options", 00:15:14.859 "params": { 00:15:14.859 "timeout_sec": 30 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "bdev_nvme_set_options", 00:15:14.859 "params": { 00:15:14.859 "action_on_timeout": "none", 00:15:14.859 "timeout_us": 0, 00:15:14.859 "timeout_admin_us": 0, 00:15:14.859 "keep_alive_timeout_ms": 10000, 00:15:14.859 "arbitration_burst": 0, 00:15:14.859 "low_priority_weight": 0, 00:15:14.859 "medium_priority_weight": 0, 00:15:14.859 "high_priority_weight": 0, 00:15:14.859 "nvme_adminq_poll_period_us": 10000, 00:15:14.859 "nvme_ioq_poll_period_us": 0, 00:15:14.859 "io_queue_requests": 0, 00:15:14.859 "delay_cmd_submit": true, 00:15:14.859 "transport_retry_count": 4, 00:15:14.859 "bdev_retry_count": 3, 00:15:14.859 "transport_ack_timeout": 0, 00:15:14.859 "ctrlr_loss_timeout_sec": 0, 00:15:14.859 "reconnect_delay_sec": 0, 00:15:14.859 "fast_io_fail_timeout_sec": 0, 00:15:14.859 "disable_auto_failback": false, 00:15:14.859 "generate_uuids": false, 00:15:14.859 "transport_tos": 0, 00:15:14.859 "nvme_error_stat": false, 00:15:14.859 "rdma_srq_size": 0, 00:15:14.859 "io_path_stat": false, 00:15:14.859 "allow_accel_sequence": false, 00:15:14.859 "rdma_max_cq_size": 0, 00:15:14.859 "rdma_cm_event_timeout_ms": 0, 00:15:14.859 "dhchap_digests": [ 00:15:14.859 "sha256", 00:15:14.859 "sha384", 00:15:14.859 "sha512" 00:15:14.859 ], 00:15:14.859 "dhchap_dhgroups": [ 00:15:14.859 "null", 00:15:14.859 "ffdhe2048", 00:15:14.859 "ffdhe3072", 00:15:14.859 "ffdhe4096", 00:15:14.859 "ffdhe6144", 00:15:14.859 "ffdhe8192" 00:15:14.859 ] 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "bdev_nvme_set_hotplug", 00:15:14.859 "params": { 00:15:14.859 "period_us": 100000, 00:15:14.859 "enable": false 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "bdev_malloc_create", 00:15:14.859 "params": { 00:15:14.859 "name": "malloc0", 00:15:14.859 "num_blocks": 8192, 00:15:14.859 "block_size": 4096, 00:15:14.859 "physical_block_size": 4096, 00:15:14.859 
"uuid": "e0458b4d-083e-4d1d-be9d-b93b701f57f5", 00:15:14.859 "optimal_io_boundary": 0, 00:15:14.859 "md_size": 0, 00:15:14.859 "dif_type": 0, 00:15:14.859 "dif_is_head_of_md": false, 00:15:14.859 "dif_pi_format": 0 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "bdev_wait_for_examine" 00:15:14.859 } 00:15:14.859 ] 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "subsystem": "nbd", 00:15:14.859 "config": [] 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "subsystem": "scheduler", 00:15:14.859 "config": [ 00:15:14.859 { 00:15:14.859 "method": "framework_set_scheduler", 00:15:14.859 "params": { 00:15:14.859 "name": "static" 00:15:14.859 } 00:15:14.859 } 00:15:14.859 ] 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "subsystem": "nvmf", 00:15:14.859 "config": [ 00:15:14.859 { 00:15:14.859 "method": "nvmf_set_config", 00:15:14.859 "params": { 00:15:14.859 "discovery_filter": "match_any", 00:15:14.859 "admin_cmd_passthru": { 00:15:14.859 "identify_ctrlr": false 00:15:14.859 }, 00:15:14.859 "dhchap_digests": [ 00:15:14.859 "sha256", 00:15:14.859 "sha384", 00:15:14.859 "sha512" 00:15:14.859 ], 00:15:14.859 "dhchap_dhgroups": [ 00:15:14.859 "null", 00:15:14.859 "ffdhe2048", 00:15:14.859 "ffdhe3072", 00:15:14.859 "ffdhe4096", 00:15:14.859 "ffdhe6144", 00:15:14.859 "ffdhe8192" 00:15:14.859 ] 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_set_max_subsystems", 00:15:14.859 "params": { 00:15:14.859 "max_subsystems": 1024 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_set_crdt", 00:15:14.859 "params": { 00:15:14.859 "crdt1": 0, 00:15:14.859 "crdt2": 0, 00:15:14.859 "crdt3": 0 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_create_transport", 00:15:14.859 "params": { 00:15:14.859 "trtype": "TCP", 00:15:14.859 "max_queue_depth": 128, 00:15:14.859 "max_io_qpairs_per_ctrlr": 127, 00:15:14.859 "in_capsule_data_size": 4096, 00:15:14.859 "max_io_size": 131072, 00:15:14.859 "io_unit_size": 131072, 00:15:14.859 "max_aq_depth": 128, 00:15:14.859 "num_shared_buffers": 511, 00:15:14.859 "buf_cache_size": 4294967295, 00:15:14.859 "dif_insert_or_strip": false, 00:15:14.859 "zcopy": false, 00:15:14.859 "c2h_success": false, 00:15:14.859 "sock_priority": 0, 00:15:14.859 "abort_timeout_sec": 1, 00:15:14.859 "ack_timeout": 0, 00:15:14.859 "data_wr_pool_size": 0 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_create_subsystem", 00:15:14.859 "params": { 00:15:14.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.859 "allow_any_host": false, 00:15:14.859 "serial_number": "SPDK00000000000001", 00:15:14.859 "model_number": "SPDK bdev Controller", 00:15:14.859 "max_namespaces": 10, 00:15:14.859 "min_cntlid": 1, 00:15:14.859 "max_cntlid": 65519, 00:15:14.859 "ana_reporting": false 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_subsystem_add_host", 00:15:14.859 "params": { 00:15:14.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.859 "host": "nqn.2016-06.io.spdk:host1", 00:15:14.859 "psk": "key0" 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 "method": "nvmf_subsystem_add_ns", 00:15:14.859 "params": { 00:15:14.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.859 "namespace": { 00:15:14.859 "nsid": 1, 00:15:14.859 "bdev_name": "malloc0", 00:15:14.859 "nguid": "E0458B4D083E4D1DBE9DB93B701F57F5", 00:15:14.859 "uuid": "e0458b4d-083e-4d1d-be9d-b93b701f57f5", 00:15:14.859 "no_auto_visible": false 00:15:14.859 } 00:15:14.859 } 00:15:14.859 }, 00:15:14.859 { 00:15:14.859 
"method": "nvmf_subsystem_add_listener", 00:15:14.859 "params": { 00:15:14.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.859 "listen_address": { 00:15:14.859 "trtype": "TCP", 00:15:14.859 "adrfam": "IPv4", 00:15:14.859 "traddr": "10.0.0.3", 00:15:14.859 "trsvcid": "4420" 00:15:14.859 }, 00:15:14.859 "secure_channel": true 00:15:14.859 } 00:15:14.859 } 00:15:14.859 ] 00:15:14.859 } 00:15:14.859 ] 00:15:14.859 }' 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=83847 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 83847 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83847 ']' 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:14.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:14.859 06:07:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:14.859 [2024-10-01 06:07:40.370187] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:14.859 [2024-10-01 06:07:40.370371] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:15.119 [2024-10-01 06:07:40.519591] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.119 [2024-10-01 06:07:40.552029] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:15.119 [2024-10-01 06:07:40.552090] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:15.119 [2024-10-01 06:07:40.552099] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:15.119 [2024-10-01 06:07:40.552106] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:15.119 [2024-10-01 06:07:40.552112] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:15.119 [2024-10-01 06:07:40.552177] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:15.119 [2024-10-01 06:07:40.695531] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:15.378 [2024-10-01 06:07:40.750542] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:15.378 [2024-10-01 06:07:40.787344] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:15.378 [2024-10-01 06:07:40.787707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=83879 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 83879 /var/tmp/bdevperf.sock 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 83879 ']' 00:15:15.963 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:15.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
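With the target listening on 10.0.0.3 port 4420, the initiator side is a bdevperf instance started idle (-z) on its own RPC socket, fed the JSON configuration that follows on /dev/fd/63, and then driven over that socket. Condensed from the commands in this log (paths relative to the SPDK repo):

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests   # -t 20 bounds the RPC wait; the 10-second run length comes from bdevperf's own -t 10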
00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:15:15.964 "subsystems": [ 00:15:15.964 { 00:15:15.964 "subsystem": "keyring", 00:15:15.964 "config": [ 00:15:15.964 { 00:15:15.964 "method": "keyring_file_add_key", 00:15:15.964 "params": { 00:15:15.964 "name": "key0", 00:15:15.964 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:15.964 } 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "iobuf", 00:15:15.964 "config": [ 00:15:15.964 { 00:15:15.964 "method": "iobuf_set_options", 00:15:15.964 "params": { 00:15:15.964 "small_pool_count": 8192, 00:15:15.964 "large_pool_count": 1024, 00:15:15.964 "small_bufsize": 8192, 00:15:15.964 "large_bufsize": 135168 00:15:15.964 } 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "sock", 00:15:15.964 "config": [ 00:15:15.964 { 00:15:15.964 "method": "sock_set_default_impl", 00:15:15.964 "params": { 00:15:15.964 "impl_name": "uring" 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "sock_impl_set_options", 00:15:15.964 "params": { 00:15:15.964 "impl_name": "ssl", 00:15:15.964 "recv_buf_size": 4096, 00:15:15.964 "send_buf_size": 4096, 00:15:15.964 "enable_recv_pipe": true, 00:15:15.964 "enable_quickack": false, 00:15:15.964 "enable_placement_id": 0, 00:15:15.964 "enable_zerocopy_send_server": true, 00:15:15.964 "enable_zerocopy_send_client": false, 00:15:15.964 "zerocopy_threshold": 0, 00:15:15.964 "tls_version": 0, 00:15:15.964 "enable_ktls": false 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "sock_impl_set_options", 00:15:15.964 "params": { 00:15:15.964 "impl_name": "posix", 00:15:15.964 "recv_buf_size": 2097152, 00:15:15.964 "send_buf_size": 2097152, 00:15:15.964 "enable_recv_pipe": true, 00:15:15.964 "enable_quickack": false, 00:15:15.964 "enable_placement_id": 0, 00:15:15.964 "enable_zerocopy_send_server": true, 00:15:15.964 "enable_zerocopy_send_client": false, 00:15:15.964 "zerocopy_threshold": 0, 00:15:15.964 "tls_version": 0, 00:15:15.964 "enable_ktls": false 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "sock_impl_set_options", 00:15:15.964 "params": { 00:15:15.964 "impl_name": "uring", 00:15:15.964 "recv_buf_size": 2097152, 00:15:15.964 "send_buf_size": 2097152, 00:15:15.964 "enable_recv_pipe": true, 00:15:15.964 "enable_quickack": false, 00:15:15.964 "enable_placement_id": 0, 00:15:15.964 "enable_zerocopy_send_server": false, 00:15:15.964 "enable_zerocopy_send_client": false, 00:15:15.964 "zerocopy_threshold": 0, 00:15:15.964 "tls_version": 0, 00:15:15.964 "enable_ktls": false 00:15:15.964 } 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "vmd", 00:15:15.964 "config": [] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "accel", 00:15:15.964 "config": [ 00:15:15.964 { 00:15:15.964 "method": "accel_set_options", 00:15:15.964 "params": { 00:15:15.964 "small_cache_size": 128, 00:15:15.964 "large_cache_size": 16, 00:15:15.964 "task_count": 2048, 00:15:15.964 "sequence_count": 2048, 00:15:15.964 "buf_count": 2048 00:15:15.964 } 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "bdev", 00:15:15.964 "config": [ 00:15:15.964 { 00:15:15.964 "method": "bdev_set_options", 00:15:15.964 "params": { 00:15:15.964 "bdev_io_pool_size": 65535, 00:15:15.964 "bdev_io_cache_size": 256, 00:15:15.964 
"bdev_auto_examine": true, 00:15:15.964 "iobuf_small_cache_size": 128, 00:15:15.964 "iobuf_large_cache_size": 16 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_raid_set_options", 00:15:15.964 "params": { 00:15:15.964 "process_window_size_kb": 1024, 00:15:15.964 "process_max_bandwidth_mb_sec": 0 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_iscsi_set_options", 00:15:15.964 "params": { 00:15:15.964 "timeout_sec": 30 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_nvme_set_options", 00:15:15.964 "params": { 00:15:15.964 "action_on_timeout": "none", 00:15:15.964 "timeout_us": 0, 00:15:15.964 "timeout_admin_us": 0, 00:15:15.964 "keep_alive_timeout_ms": 10000, 00:15:15.964 "arbitration_burst": 0, 00:15:15.964 "low_priority_weight": 0, 00:15:15.964 "medium_priority_weight": 0, 00:15:15.964 "high_priority_weight": 0, 00:15:15.964 "nvme_adminq_poll_period_us": 10000, 00:15:15.964 "nvme_ioq_poll_period_us": 0, 00:15:15.964 "io_queue_requests": 512, 00:15:15.964 "delay_cmd_submit": true, 00:15:15.964 "transport_retry_count": 4, 00:15:15.964 "bdev_retry_count": 3, 00:15:15.964 "transport_ack_timeout": 0, 00:15:15.964 "ctrlr_loss_timeout_sec": 0, 00:15:15.964 "reconnect_delay_sec": 0, 00:15:15.964 "fast_io_fail_timeout_sec": 0, 00:15:15.964 "disable_auto_failback": false, 00:15:15.964 "generate_uuids": false, 00:15:15.964 "transport_tos": 0, 00:15:15.964 "nvme_error_stat": false, 00:15:15.964 "rdma_srq_size": 0, 00:15:15.964 "io_path_stat": false, 00:15:15.964 "allow_accel_sequence": false, 00:15:15.964 "rdma_max_cq_size": 0, 00:15:15.964 "rdma_cm_event_timeout_ms": 0, 00:15:15.964 "dhchap_digests": [ 00:15:15.964 "sha256", 00:15:15.964 "sha384", 00:15:15.964 "sha512" 00:15:15.964 ], 00:15:15.964 "dhchap_dhgroups": [ 00:15:15.964 "null", 00:15:15.964 "ffdhe2048", 00:15:15.964 "ffdhe3072", 00:15:15.964 "ffdhe4096", 00:15:15.964 "ffdhe6144", 00:15:15.964 "ffdhe8192" 00:15:15.964 ] 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_nvme_attach_controller", 00:15:15.964 "params": { 00:15:15.964 "name": "TLSTEST", 00:15:15.964 "trtype": "TCP", 00:15:15.964 "adrfam": "IPv4", 00:15:15.964 "traddr": "10.0.0.3", 00:15:15.964 "trsvcid": "4420", 00:15:15.964 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:15.964 "prchk_reftag": false, 00:15:15.964 "prchk_guard": false, 00:15:15.964 "ctrlr_loss_timeout_sec": 0, 00:15:15.964 "reconnect_delay_sec": 0, 00:15:15.964 "fast_io_fail_timeout_sec": 0, 00:15:15.964 "psk": "key0", 00:15:15.964 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:15.964 "hdgst": false, 00:15:15.964 "ddgst": false 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_nvme_set_hotplug", 00:15:15.964 "params": { 00:15:15.964 "period_us": 100000, 00:15:15.964 "enable": false 00:15:15.964 } 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "method": "bdev_wait_for_examine" 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }, 00:15:15.964 { 00:15:15.964 "subsystem": "nbd", 00:15:15.964 "config": [] 00:15:15.964 } 00:15:15.964 ] 00:15:15.964 }' 00:15:15.964 06:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:15.964 [2024-10-01 06:07:41.416537] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:15:15.964 [2024-10-01 06:07:41.416802] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83879 ] 00:15:15.964 [2024-10-01 06:07:41.551307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.230 [2024-10-01 06:07:41.588425] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.230 [2024-10-01 06:07:41.701977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:16.230 [2024-10-01 06:07:41.732737] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:17.166 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:17.166 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:17.166 06:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:15:17.166 Running I/O for 10 seconds... 00:15:27.434 3662.00 IOPS, 14.30 MiB/s 3954.00 IOPS, 15.45 MiB/s 3989.00 IOPS, 15.58 MiB/s 4093.25 IOPS, 15.99 MiB/s 4151.20 IOPS, 16.22 MiB/s 4175.67 IOPS, 16.31 MiB/s 4206.57 IOPS, 16.43 MiB/s 4213.12 IOPS, 16.46 MiB/s 4211.89 IOPS, 16.45 MiB/s 4199.20 IOPS, 16.40 MiB/s 00:15:27.434 Latency(us) 00:15:27.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.434 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:27.434 Verification LBA range: start 0x0 length 0x2000 00:15:27.434 TLSTESTn1 : 10.02 4205.58 16.43 0.00 0.00 30381.26 5332.25 25380.31 00:15:27.434 =================================================================================================================== 00:15:27.434 Total : 4205.58 16.43 0.00 0.00 30381.26 5332.25 25380.31 00:15:27.434 { 00:15:27.434 "results": [ 00:15:27.434 { 00:15:27.434 "job": "TLSTESTn1", 00:15:27.434 "core_mask": "0x4", 00:15:27.434 "workload": "verify", 00:15:27.434 "status": "finished", 00:15:27.434 "verify_range": { 00:15:27.434 "start": 0, 00:15:27.434 "length": 8192 00:15:27.434 }, 00:15:27.434 "queue_depth": 128, 00:15:27.434 "io_size": 4096, 00:15:27.434 "runtime": 10.015259, 00:15:27.434 "iops": 4205.582701356001, 00:15:27.434 "mibps": 16.42805742717188, 00:15:27.434 "io_failed": 0, 00:15:27.434 "io_timeout": 0, 00:15:27.434 "avg_latency_us": 30381.258713632047, 00:15:27.434 "min_latency_us": 5332.2472727272725, 00:15:27.434 "max_latency_us": 25380.305454545454 00:15:27.434 } 00:15:27.434 ], 00:15:27.434 "core_count": 1 00:15:27.434 } 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 83879 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83879 ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83879 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83879 00:15:27.434 killing process with pid 83879 00:15:27.434 Received shutdown signal, test time was about 10.000000 seconds 00:15:27.434 00:15:27.434 Latency(us) 00:15:27.434 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:27.434 =================================================================================================================== 00:15:27.434 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83879' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83879 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83879 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 83847 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 83847 ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 83847 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83847 00:15:27.434 killing process with pid 83847 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83847' 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 83847 00:15:27.434 06:07:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 83847 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84019 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84019 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84019 ']' 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:15:27.434 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:27.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:27.435 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:27.435 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.694 [2024-10-01 06:07:53.086742] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:27.694 [2024-10-01 06:07:53.088078] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:27.694 [2024-10-01 06:07:53.229518] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.694 [2024-10-01 06:07:53.272087] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:27.694 [2024-10-01 06:07:53.272155] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:27.694 [2024-10-01 06:07:53.272177] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:27.694 [2024-10-01 06:07:53.272187] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:27.694 [2024-10-01 06:07:53.272195] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:27.694 [2024-10-01 06:07:53.272240] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.694 [2024-10-01 06:07:53.305731] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.yw2ZFMTuJx 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.yw2ZFMTuJx 00:15:27.953 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:15:28.212 [2024-10-01 06:07:53.691899] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:28.212 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:15:28.470 06:07:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 
4420 -k 00:15:28.729 [2024-10-01 06:07:54.264072] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:28.729 [2024-10-01 06:07:54.264275] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:28.729 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:15:28.987 malloc0 00:15:28.987 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:29.246 06:07:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:29.505 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:15:29.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=84067 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 84067 /var/tmp/bdevperf.sock 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84067 ']' 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:29.764 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:29.764 [2024-10-01 06:07:55.346134] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
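This second bdevperf instance is launched with -o 4k, i.e. the same 4096-byte I/O size as the first run. For reference, the mibps column bdevperf reports follows directly from iops and that I/O size; a quick check against the 10-second run reported earlier in this log:

    # MiB/s = IOPS * io_size / 2^20 (plain arithmetic, not tool output)
    awk 'BEGIN { printf "%.2f\n", 4205.58 * 4096 / 1048576 }'   # prints 16.43, matching the result table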
00:15:29.764 [2024-10-01 06:07:55.346454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84067 ] 00:15:30.023 [2024-10-01 06:07:55.480046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:30.023 [2024-10-01 06:07:55.518464] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:30.023 [2024-10-01 06:07:55.551413] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:30.023 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:30.023 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:30.023 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:30.591 06:07:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:30.850 [2024-10-01 06:07:56.264989] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:30.850 nvme0n1 00:15:30.850 06:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.850 Running I/O for 1 seconds... 00:15:32.230 4037.00 IOPS, 15.77 MiB/s 00:15:32.230 Latency(us) 00:15:32.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.230 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:32.230 Verification LBA range: start 0x0 length 0x2000 00:15:32.230 nvme0n1 : 1.02 4092.75 15.99 0.00 0.00 30976.17 6762.12 24069.59 00:15:32.230 =================================================================================================================== 00:15:32.230 Total : 4092.75 15.99 0.00 0.00 30976.17 6762.12 24069.59 00:15:32.230 { 00:15:32.230 "results": [ 00:15:32.230 { 00:15:32.230 "job": "nvme0n1", 00:15:32.230 "core_mask": "0x2", 00:15:32.230 "workload": "verify", 00:15:32.230 "status": "finished", 00:15:32.230 "verify_range": { 00:15:32.230 "start": 0, 00:15:32.230 "length": 8192 00:15:32.230 }, 00:15:32.230 "queue_depth": 128, 00:15:32.230 "io_size": 4096, 00:15:32.230 "runtime": 1.017652, 00:15:32.230 "iops": 4092.7546941390574, 00:15:32.230 "mibps": 15.987323023980693, 00:15:32.230 "io_failed": 0, 00:15:32.230 "io_timeout": 0, 00:15:32.230 "avg_latency_us": 30976.166513150714, 00:15:32.230 "min_latency_us": 6762.123636363636, 00:15:32.230 "max_latency_us": 24069.585454545453 00:15:32.230 } 00:15:32.230 ], 00:15:32.230 "core_count": 1 00:15:32.230 } 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 84067 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84067 ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84067 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.230 06:07:57 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84067 00:15:32.230 killing process with pid 84067 00:15:32.230 Received shutdown signal, test time was about 1.000000 seconds 00:15:32.230 00:15:32.230 Latency(us) 00:15:32.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.230 =================================================================================================================== 00:15:32.230 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84067' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84067 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84067 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 84019 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84019 ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84019 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84019 00:15:32.230 killing process with pid 84019 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84019' 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84019 00:15:32.230 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84019 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84111 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84111 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84111 ']' 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.489 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.490 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.490 06:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.490 [2024-10-01 06:07:57.940588] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:32.490 [2024-10-01 06:07:57.940680] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.490 [2024-10-01 06:07:58.080025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.749 [2024-10-01 06:07:58.117071] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.749 [2024-10-01 06:07:58.117122] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:32.749 [2024-10-01 06:07:58.117147] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.749 [2024-10-01 06:07:58.117154] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.749 [2024-10-01 06:07:58.117160] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:32.749 [2024-10-01 06:07:58.117185] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.749 [2024-10-01 06:07:58.147538] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:32.749 [2024-10-01 06:07:58.244683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.749 malloc0 00:15:32.749 [2024-10-01 06:07:58.280872] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:32.749 [2024-10-01 06:07:58.281321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:32.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=84135 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 84135 /var/tmp/bdevperf.sock 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84135 ']' 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:32.749 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:33.007 [2024-10-01 06:07:58.367969] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
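The point of this third target/initiator pair is the configuration round trip that follows the run: once both daemons are set up, save_config captures their live configuration as JSON (the tgtcfg and bperfcfg blobs below), which the test can then feed back at the next startup the same way the first target was given an inline config. A rough sketch of that pattern:

    tgtcfg=$(scripts/rpc.py save_config)                              # from the running nvmf_tgt
    bperfcfg=$(scripts/rpc.py -s /var/tmp/bdevperf.sock save_config)  # from the running bdevperf
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$tgtcfg")            # relaunch the target from the captured JSON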
00:15:33.007 [2024-10-01 06:07:58.368312] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84135 ] 00:15:33.007 [2024-10-01 06:07:58.509352] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.007 [2024-10-01 06:07:58.553304] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:33.007 [2024-10-01 06:07:58.588102] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:33.266 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:33.266 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:33.266 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.yw2ZFMTuJx 00:15:33.528 06:07:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:15:33.790 [2024-10-01 06:07:59.247400] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:33.790 nvme0n1 00:15:33.790 06:07:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.049 Running I/O for 1 seconds... 00:15:34.988 3840.00 IOPS, 15.00 MiB/s 00:15:34.988 Latency(us) 00:15:34.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:34.988 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:34.988 Verification LBA range: start 0x0 length 0x2000 00:15:34.988 nvme0n1 : 1.03 3842.14 15.01 0.00 0.00 32945.25 12213.53 25022.84 00:15:34.988 =================================================================================================================== 00:15:34.988 Total : 3842.14 15.01 0.00 0.00 32945.25 12213.53 25022.84 00:15:34.988 { 00:15:34.988 "results": [ 00:15:34.988 { 00:15:34.988 "job": "nvme0n1", 00:15:34.988 "core_mask": "0x2", 00:15:34.988 "workload": "verify", 00:15:34.988 "status": "finished", 00:15:34.988 "verify_range": { 00:15:34.988 "start": 0, 00:15:34.988 "length": 8192 00:15:34.988 }, 00:15:34.988 "queue_depth": 128, 00:15:34.988 "io_size": 4096, 00:15:34.988 "runtime": 1.032758, 00:15:34.988 "iops": 3842.1392039567836, 00:15:34.988 "mibps": 15.008356265456186, 00:15:34.988 "io_failed": 0, 00:15:34.988 "io_timeout": 0, 00:15:34.988 "avg_latency_us": 32945.24809384165, 00:15:34.988 "min_latency_us": 12213.527272727273, 00:15:34.988 "max_latency_us": 25022.836363636365 00:15:34.988 } 00:15:34.988 ], 00:15:34.988 "core_count": 1 00:15:34.988 } 00:15:34.988 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:15:34.988 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.988 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.248 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.248 06:08:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:15:35.248 "subsystems": [ 00:15:35.248 { 00:15:35.248 "subsystem": "keyring", 00:15:35.248 "config": [ 00:15:35.248 { 00:15:35.248 "method": "keyring_file_add_key", 00:15:35.248 "params": { 00:15:35.248 "name": "key0", 00:15:35.248 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:35.248 } 00:15:35.248 } 00:15:35.248 ] 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "subsystem": "iobuf", 00:15:35.248 "config": [ 00:15:35.248 { 00:15:35.248 "method": "iobuf_set_options", 00:15:35.248 "params": { 00:15:35.248 "small_pool_count": 8192, 00:15:35.248 "large_pool_count": 1024, 00:15:35.248 "small_bufsize": 8192, 00:15:35.248 "large_bufsize": 135168 00:15:35.248 } 00:15:35.248 } 00:15:35.248 ] 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "subsystem": "sock", 00:15:35.248 "config": [ 00:15:35.248 { 00:15:35.248 "method": "sock_set_default_impl", 00:15:35.248 "params": { 00:15:35.248 "impl_name": "uring" 00:15:35.248 } 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "sock_impl_set_options", 00:15:35.248 "params": { 00:15:35.248 "impl_name": "ssl", 00:15:35.248 "recv_buf_size": 4096, 00:15:35.248 "send_buf_size": 4096, 00:15:35.248 "enable_recv_pipe": true, 00:15:35.248 "enable_quickack": false, 00:15:35.248 "enable_placement_id": 0, 00:15:35.248 "enable_zerocopy_send_server": true, 00:15:35.248 "enable_zerocopy_send_client": false, 00:15:35.248 "zerocopy_threshold": 0, 00:15:35.248 "tls_version": 0, 00:15:35.248 "enable_ktls": false 00:15:35.248 } 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "sock_impl_set_options", 00:15:35.248 "params": { 00:15:35.248 "impl_name": "posix", 00:15:35.248 "recv_buf_size": 2097152, 00:15:35.248 "send_buf_size": 2097152, 00:15:35.248 "enable_recv_pipe": true, 00:15:35.248 "enable_quickack": false, 00:15:35.248 "enable_placement_id": 0, 00:15:35.248 "enable_zerocopy_send_server": true, 00:15:35.248 "enable_zerocopy_send_client": false, 00:15:35.248 "zerocopy_threshold": 0, 00:15:35.248 "tls_version": 0, 00:15:35.248 "enable_ktls": false 00:15:35.248 } 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "sock_impl_set_options", 00:15:35.248 "params": { 00:15:35.248 "impl_name": "uring", 00:15:35.248 "recv_buf_size": 2097152, 00:15:35.248 "send_buf_size": 2097152, 00:15:35.248 "enable_recv_pipe": true, 00:15:35.248 "enable_quickack": false, 00:15:35.248 "enable_placement_id": 0, 00:15:35.248 "enable_zerocopy_send_server": false, 00:15:35.248 "enable_zerocopy_send_client": false, 00:15:35.248 "zerocopy_threshold": 0, 00:15:35.248 "tls_version": 0, 00:15:35.248 "enable_ktls": false 00:15:35.248 } 00:15:35.248 } 00:15:35.248 ] 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "subsystem": "vmd", 00:15:35.248 "config": [] 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "subsystem": "accel", 00:15:35.248 "config": [ 00:15:35.248 { 00:15:35.248 "method": "accel_set_options", 00:15:35.248 "params": { 00:15:35.248 "small_cache_size": 128, 00:15:35.248 "large_cache_size": 16, 00:15:35.248 "task_count": 2048, 00:15:35.248 "sequence_count": 2048, 00:15:35.248 "buf_count": 2048 00:15:35.248 } 00:15:35.248 } 00:15:35.248 ] 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "subsystem": "bdev", 00:15:35.248 "config": [ 00:15:35.248 { 00:15:35.248 "method": "bdev_set_options", 00:15:35.248 "params": { 00:15:35.248 "bdev_io_pool_size": 65535, 00:15:35.248 "bdev_io_cache_size": 256, 00:15:35.248 "bdev_auto_examine": true, 00:15:35.248 "iobuf_small_cache_size": 128, 00:15:35.248 "iobuf_large_cache_size": 16 00:15:35.248 } 
00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "bdev_raid_set_options", 00:15:35.248 "params": { 00:15:35.248 "process_window_size_kb": 1024, 00:15:35.248 "process_max_bandwidth_mb_sec": 0 00:15:35.248 } 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "bdev_iscsi_set_options", 00:15:35.248 "params": { 00:15:35.248 "timeout_sec": 30 00:15:35.248 } 00:15:35.248 }, 00:15:35.248 { 00:15:35.248 "method": "bdev_nvme_set_options", 00:15:35.248 "params": { 00:15:35.248 "action_on_timeout": "none", 00:15:35.248 "timeout_us": 0, 00:15:35.248 "timeout_admin_us": 0, 00:15:35.248 "keep_alive_timeout_ms": 10000, 00:15:35.248 "arbitration_burst": 0, 00:15:35.248 "low_priority_weight": 0, 00:15:35.248 "medium_priority_weight": 0, 00:15:35.248 "high_priority_weight": 0, 00:15:35.248 "nvme_adminq_poll_period_us": 10000, 00:15:35.248 "nvme_ioq_poll_period_us": 0, 00:15:35.248 "io_queue_requests": 0, 00:15:35.248 "delay_cmd_submit": true, 00:15:35.248 "transport_retry_count": 4, 00:15:35.248 "bdev_retry_count": 3, 00:15:35.248 "transport_ack_timeout": 0, 00:15:35.248 "ctrlr_loss_timeout_sec": 0, 00:15:35.248 "reconnect_delay_sec": 0, 00:15:35.249 "fast_io_fail_timeout_sec": 0, 00:15:35.249 "disable_auto_failback": false, 00:15:35.249 "generate_uuids": false, 00:15:35.249 "transport_tos": 0, 00:15:35.249 "nvme_error_stat": false, 00:15:35.249 "rdma_srq_size": 0, 00:15:35.249 "io_path_stat": false, 00:15:35.249 "allow_accel_sequence": false, 00:15:35.249 "rdma_max_cq_size": 0, 00:15:35.249 "rdma_cm_event_timeout_ms": 0, 00:15:35.249 "dhchap_digests": [ 00:15:35.249 "sha256", 00:15:35.249 "sha384", 00:15:35.249 "sha512" 00:15:35.249 ], 00:15:35.249 "dhchap_dhgroups": [ 00:15:35.249 "null", 00:15:35.249 "ffdhe2048", 00:15:35.249 "ffdhe3072", 00:15:35.249 "ffdhe4096", 00:15:35.249 "ffdhe6144", 00:15:35.249 "ffdhe8192" 00:15:35.249 ] 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "bdev_nvme_set_hotplug", 00:15:35.249 "params": { 00:15:35.249 "period_us": 100000, 00:15:35.249 "enable": false 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "bdev_malloc_create", 00:15:35.249 "params": { 00:15:35.249 "name": "malloc0", 00:15:35.249 "num_blocks": 8192, 00:15:35.249 "block_size": 4096, 00:15:35.249 "physical_block_size": 4096, 00:15:35.249 "uuid": "1928db43-3dac-41b6-bd40-a73bbccf8d56", 00:15:35.249 "optimal_io_boundary": 0, 00:15:35.249 "md_size": 0, 00:15:35.249 "dif_type": 0, 00:15:35.249 "dif_is_head_of_md": false, 00:15:35.249 "dif_pi_format": 0 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "bdev_wait_for_examine" 00:15:35.249 } 00:15:35.249 ] 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "subsystem": "nbd", 00:15:35.249 "config": [] 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "subsystem": "scheduler", 00:15:35.249 "config": [ 00:15:35.249 { 00:15:35.249 "method": "framework_set_scheduler", 00:15:35.249 "params": { 00:15:35.249 "name": "static" 00:15:35.249 } 00:15:35.249 } 00:15:35.249 ] 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "subsystem": "nvmf", 00:15:35.249 "config": [ 00:15:35.249 { 00:15:35.249 "method": "nvmf_set_config", 00:15:35.249 "params": { 00:15:35.249 "discovery_filter": "match_any", 00:15:35.249 "admin_cmd_passthru": { 00:15:35.249 "identify_ctrlr": false 00:15:35.249 }, 00:15:35.249 "dhchap_digests": [ 00:15:35.249 "sha256", 00:15:35.249 "sha384", 00:15:35.249 "sha512" 00:15:35.249 ], 00:15:35.249 "dhchap_dhgroups": [ 00:15:35.249 "null", 00:15:35.249 "ffdhe2048", 00:15:35.249 "ffdhe3072", 00:15:35.249 "ffdhe4096", 
00:15:35.249 "ffdhe6144", 00:15:35.249 "ffdhe8192" 00:15:35.249 ] 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_set_max_subsystems", 00:15:35.249 "params": { 00:15:35.249 "max_subsystems": 1024 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_set_crdt", 00:15:35.249 "params": { 00:15:35.249 "crdt1": 0, 00:15:35.249 "crdt2": 0, 00:15:35.249 "crdt3": 0 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_create_transport", 00:15:35.249 "params": { 00:15:35.249 "trtype": "TCP", 00:15:35.249 "max_queue_depth": 128, 00:15:35.249 "max_io_qpairs_per_ctrlr": 127, 00:15:35.249 "in_capsule_data_size": 4096, 00:15:35.249 "max_io_size": 131072, 00:15:35.249 "io_unit_size": 131072, 00:15:35.249 "max_aq_depth": 128, 00:15:35.249 "num_shared_buffers": 511, 00:15:35.249 "buf_cache_size": 4294967295, 00:15:35.249 "dif_insert_or_strip": false, 00:15:35.249 "zcopy": false, 00:15:35.249 "c2h_success": false, 00:15:35.249 "sock_priority": 0, 00:15:35.249 "abort_timeout_sec": 1, 00:15:35.249 "ack_timeout": 0, 00:15:35.249 "data_wr_pool_size": 0 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_create_subsystem", 00:15:35.249 "params": { 00:15:35.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.249 "allow_any_host": false, 00:15:35.249 "serial_number": "00000000000000000000", 00:15:35.249 "model_number": "SPDK bdev Controller", 00:15:35.249 "max_namespaces": 32, 00:15:35.249 "min_cntlid": 1, 00:15:35.249 "max_cntlid": 65519, 00:15:35.249 "ana_reporting": false 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_subsystem_add_host", 00:15:35.249 "params": { 00:15:35.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.249 "host": "nqn.2016-06.io.spdk:host1", 00:15:35.249 "psk": "key0" 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_subsystem_add_ns", 00:15:35.249 "params": { 00:15:35.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.249 "namespace": { 00:15:35.249 "nsid": 1, 00:15:35.249 "bdev_name": "malloc0", 00:15:35.249 "nguid": "1928DB433DAC41B6BD40A73BBCCF8D56", 00:15:35.249 "uuid": "1928db43-3dac-41b6-bd40-a73bbccf8d56", 00:15:35.249 "no_auto_visible": false 00:15:35.249 } 00:15:35.249 } 00:15:35.249 }, 00:15:35.249 { 00:15:35.249 "method": "nvmf_subsystem_add_listener", 00:15:35.249 "params": { 00:15:35.249 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.249 "listen_address": { 00:15:35.249 "trtype": "TCP", 00:15:35.249 "adrfam": "IPv4", 00:15:35.249 "traddr": "10.0.0.3", 00:15:35.249 "trsvcid": "4420" 00:15:35.249 }, 00:15:35.249 "secure_channel": false, 00:15:35.249 "sock_impl": "ssl" 00:15:35.249 } 00:15:35.249 } 00:15:35.249 ] 00:15:35.249 } 00:15:35.249 ] 00:15:35.249 }' 00:15:35.249 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:15:35.509 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:15:35.509 "subsystems": [ 00:15:35.509 { 00:15:35.509 "subsystem": "keyring", 00:15:35.509 "config": [ 00:15:35.509 { 00:15:35.509 "method": "keyring_file_add_key", 00:15:35.509 "params": { 00:15:35.509 "name": "key0", 00:15:35.509 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:35.509 } 00:15:35.509 } 00:15:35.509 ] 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "subsystem": "iobuf", 00:15:35.509 "config": [ 00:15:35.509 { 00:15:35.509 "method": "iobuf_set_options", 00:15:35.509 "params": { 00:15:35.509 "small_pool_count": 8192, 00:15:35.509 
"large_pool_count": 1024, 00:15:35.509 "small_bufsize": 8192, 00:15:35.509 "large_bufsize": 135168 00:15:35.509 } 00:15:35.509 } 00:15:35.509 ] 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "subsystem": "sock", 00:15:35.509 "config": [ 00:15:35.509 { 00:15:35.509 "method": "sock_set_default_impl", 00:15:35.509 "params": { 00:15:35.509 "impl_name": "uring" 00:15:35.509 } 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "method": "sock_impl_set_options", 00:15:35.509 "params": { 00:15:35.509 "impl_name": "ssl", 00:15:35.509 "recv_buf_size": 4096, 00:15:35.509 "send_buf_size": 4096, 00:15:35.509 "enable_recv_pipe": true, 00:15:35.509 "enable_quickack": false, 00:15:35.509 "enable_placement_id": 0, 00:15:35.509 "enable_zerocopy_send_server": true, 00:15:35.509 "enable_zerocopy_send_client": false, 00:15:35.509 "zerocopy_threshold": 0, 00:15:35.509 "tls_version": 0, 00:15:35.509 "enable_ktls": false 00:15:35.509 } 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "method": "sock_impl_set_options", 00:15:35.509 "params": { 00:15:35.509 "impl_name": "posix", 00:15:35.509 "recv_buf_size": 2097152, 00:15:35.509 "send_buf_size": 2097152, 00:15:35.509 "enable_recv_pipe": true, 00:15:35.509 "enable_quickack": false, 00:15:35.509 "enable_placement_id": 0, 00:15:35.509 "enable_zerocopy_send_server": true, 00:15:35.509 "enable_zerocopy_send_client": false, 00:15:35.509 "zerocopy_threshold": 0, 00:15:35.509 "tls_version": 0, 00:15:35.509 "enable_ktls": false 00:15:35.509 } 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "method": "sock_impl_set_options", 00:15:35.509 "params": { 00:15:35.509 "impl_name": "uring", 00:15:35.509 "recv_buf_size": 2097152, 00:15:35.509 "send_buf_size": 2097152, 00:15:35.509 "enable_recv_pipe": true, 00:15:35.509 "enable_quickack": false, 00:15:35.509 "enable_placement_id": 0, 00:15:35.509 "enable_zerocopy_send_server": false, 00:15:35.509 "enable_zerocopy_send_client": false, 00:15:35.509 "zerocopy_threshold": 0, 00:15:35.509 "tls_version": 0, 00:15:35.509 "enable_ktls": false 00:15:35.509 } 00:15:35.509 } 00:15:35.509 ] 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "subsystem": "vmd", 00:15:35.509 "config": [] 00:15:35.509 }, 00:15:35.509 { 00:15:35.509 "subsystem": "accel", 00:15:35.509 "config": [ 00:15:35.509 { 00:15:35.509 "method": "accel_set_options", 00:15:35.509 "params": { 00:15:35.509 "small_cache_size": 128, 00:15:35.509 "large_cache_size": 16, 00:15:35.509 "task_count": 2048, 00:15:35.509 "sequence_count": 2048, 00:15:35.509 "buf_count": 2048 00:15:35.509 } 00:15:35.509 } 00:15:35.509 ] 00:15:35.509 }, 00:15:35.509 { 00:15:35.510 "subsystem": "bdev", 00:15:35.510 "config": [ 00:15:35.510 { 00:15:35.510 "method": "bdev_set_options", 00:15:35.510 "params": { 00:15:35.510 "bdev_io_pool_size": 65535, 00:15:35.510 "bdev_io_cache_size": 256, 00:15:35.510 "bdev_auto_examine": true, 00:15:35.510 "iobuf_small_cache_size": 128, 00:15:35.510 "iobuf_large_cache_size": 16 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_raid_set_options", 00:15:35.510 "params": { 00:15:35.510 "process_window_size_kb": 1024, 00:15:35.510 "process_max_bandwidth_mb_sec": 0 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_iscsi_set_options", 00:15:35.510 "params": { 00:15:35.510 "timeout_sec": 30 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_nvme_set_options", 00:15:35.510 "params": { 00:15:35.510 "action_on_timeout": "none", 00:15:35.510 "timeout_us": 0, 00:15:35.510 "timeout_admin_us": 0, 00:15:35.510 "keep_alive_timeout_ms": 10000, 
00:15:35.510 "arbitration_burst": 0, 00:15:35.510 "low_priority_weight": 0, 00:15:35.510 "medium_priority_weight": 0, 00:15:35.510 "high_priority_weight": 0, 00:15:35.510 "nvme_adminq_poll_period_us": 10000, 00:15:35.510 "nvme_ioq_poll_period_us": 0, 00:15:35.510 "io_queue_requests": 512, 00:15:35.510 "delay_cmd_submit": true, 00:15:35.510 "transport_retry_count": 4, 00:15:35.510 "bdev_retry_count": 3, 00:15:35.510 "transport_ack_timeout": 0, 00:15:35.510 "ctrlr_loss_timeout_sec": 0, 00:15:35.510 "reconnect_delay_sec": 0, 00:15:35.510 "fast_io_fail_timeout_sec": 0, 00:15:35.510 "disable_auto_failback": false, 00:15:35.510 "generate_uuids": false, 00:15:35.510 "transport_tos": 0, 00:15:35.510 "nvme_error_stat": false, 00:15:35.510 "rdma_srq_size": 0, 00:15:35.510 "io_path_stat": false, 00:15:35.510 "allow_accel_sequence": false, 00:15:35.510 "rdma_max_cq_size": 0, 00:15:35.510 "rdma_cm_event_timeout_ms": 0, 00:15:35.510 "dhchap_digests": [ 00:15:35.510 "sha256", 00:15:35.510 "sha384", 00:15:35.510 "sha512" 00:15:35.510 ], 00:15:35.510 "dhchap_dhgroups": [ 00:15:35.510 "null", 00:15:35.510 "ffdhe2048", 00:15:35.510 "ffdhe3072", 00:15:35.510 "ffdhe4096", 00:15:35.510 "ffdhe6144", 00:15:35.510 "ffdhe8192" 00:15:35.510 ] 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_nvme_attach_controller", 00:15:35.510 "params": { 00:15:35.510 "name": "nvme0", 00:15:35.510 "trtype": "TCP", 00:15:35.510 "adrfam": "IPv4", 00:15:35.510 "traddr": "10.0.0.3", 00:15:35.510 "trsvcid": "4420", 00:15:35.510 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.510 "prchk_reftag": false, 00:15:35.510 "prchk_guard": false, 00:15:35.510 "ctrlr_loss_timeout_sec": 0, 00:15:35.510 "reconnect_delay_sec": 0, 00:15:35.510 "fast_io_fail_timeout_sec": 0, 00:15:35.510 "psk": "key0", 00:15:35.510 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:35.510 "hdgst": false, 00:15:35.510 "ddgst": false 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_nvme_set_hotplug", 00:15:35.510 "params": { 00:15:35.510 "period_us": 100000, 00:15:35.510 "enable": false 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_enable_histogram", 00:15:35.510 "params": { 00:15:35.510 "name": "nvme0n1", 00:15:35.510 "enable": true 00:15:35.510 } 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "method": "bdev_wait_for_examine" 00:15:35.510 } 00:15:35.510 ] 00:15:35.510 }, 00:15:35.510 { 00:15:35.510 "subsystem": "nbd", 00:15:35.510 "config": [] 00:15:35.510 } 00:15:35.510 ] 00:15:35.510 }' 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 84135 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84135 ']' 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84135 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.510 06:08:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84135 00:15:35.510 killing process with pid 84135 00:15:35.510 Received shutdown signal, test time was about 1.000000 seconds 00:15:35.510 00:15:35.510 Latency(us) 00:15:35.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:35.510 
=================================================================================================================== 00:15:35.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:35.510 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:35.510 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:35.510 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84135' 00:15:35.510 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84135 00:15:35.510 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84135 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 84111 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84111 ']' 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84111 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84111 00:15:35.770 killing process with pid 84111 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84111' 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84111 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84111 00:15:35.770 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:15:35.770 "subsystems": [ 00:15:35.770 { 00:15:35.770 "subsystem": "keyring", 00:15:35.770 "config": [ 00:15:35.770 { 00:15:35.770 "method": "keyring_file_add_key", 00:15:35.770 "params": { 00:15:35.770 "name": "key0", 00:15:35.770 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:35.770 } 00:15:35.770 } 00:15:35.770 ] 00:15:35.770 }, 00:15:35.770 { 00:15:35.770 "subsystem": "iobuf", 00:15:35.770 "config": [ 00:15:35.770 { 00:15:35.770 "method": "iobuf_set_options", 00:15:35.770 "params": { 00:15:35.770 "small_pool_count": 8192, 00:15:35.770 "large_pool_count": 1024, 00:15:35.770 "small_bufsize": 8192, 00:15:35.770 "large_bufsize": 135168 00:15:35.770 } 00:15:35.770 } 00:15:35.770 ] 00:15:35.770 }, 00:15:35.770 { 00:15:35.771 "subsystem": "sock", 00:15:35.771 "config": [ 00:15:35.771 { 00:15:35.771 "method": "sock_set_default_impl", 00:15:35.771 "params": { 00:15:35.771 "impl_name": "uring" 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "sock_impl_set_options", 00:15:35.771 "params": { 00:15:35.771 "impl_name": "ssl", 00:15:35.771 "recv_buf_size": 4096, 00:15:35.771 "send_buf_size": 4096, 00:15:35.771 "enable_recv_pipe": true, 00:15:35.771 "enable_quickack": false, 00:15:35.771 "enable_placement_id": 0, 00:15:35.771 "enable_zerocopy_send_server": true, 00:15:35.771 "enable_zerocopy_send_client": false, 00:15:35.771 "zerocopy_threshold": 0, 00:15:35.771 
"tls_version": 0, 00:15:35.771 "enable_ktls": false 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "sock_impl_set_options", 00:15:35.771 "params": { 00:15:35.771 "impl_name": "posix", 00:15:35.771 "recv_buf_size": 2097152, 00:15:35.771 "send_buf_size": 2097152, 00:15:35.771 "enable_recv_pipe": true, 00:15:35.771 "enable_quickack": false, 00:15:35.771 "enable_placement_id": 0, 00:15:35.771 "enable_zerocopy_send_server": true, 00:15:35.771 "enable_zerocopy_send_client": false, 00:15:35.771 "zerocopy_threshold": 0, 00:15:35.771 "tls_version": 0, 00:15:35.771 "enable_ktls": false 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "sock_impl_set_options", 00:15:35.771 "params": { 00:15:35.771 "impl_name": "uring", 00:15:35.771 "recv_buf_size": 2097152, 00:15:35.771 "send_buf_size": 2097152, 00:15:35.771 "enable_recv_pipe": true, 00:15:35.771 "enable_quickack": false, 00:15:35.771 "enable_placement_id": 0, 00:15:35.771 "enable_zerocopy_send_server": false, 00:15:35.771 "enable_zerocopy_send_client": false, 00:15:35.771 "zerocopy_threshold": 0, 00:15:35.771 "tls_version": 0, 00:15:35.771 "enable_ktls": false 00:15:35.771 } 00:15:35.771 } 00:15:35.771 ] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "vmd", 00:15:35.771 "config": [] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "accel", 00:15:35.771 "config": [ 00:15:35.771 { 00:15:35.771 "method": "accel_set_options", 00:15:35.771 "params": { 00:15:35.771 "small_cache_size": 128, 00:15:35.771 "large_cache_size": 16, 00:15:35.771 "task_count": 2048, 00:15:35.771 "sequence_count": 2048, 00:15:35.771 "buf_count": 2048 00:15:35.771 } 00:15:35.771 } 00:15:35.771 ] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "bdev", 00:15:35.771 "config": [ 00:15:35.771 { 00:15:35.771 "method": "bdev_set_options", 00:15:35.771 "params": { 00:15:35.771 "bdev_io_pool_size": 65535, 00:15:35.771 "bdev_io_cache_size": 256, 00:15:35.771 "bdev_auto_examine": true, 00:15:35.771 "iobuf_small_cache_size": 128, 00:15:35.771 "iobuf_large_cache_size": 16 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_raid_set_options", 00:15:35.771 "params": { 00:15:35.771 "process_window_size_kb": 1024, 00:15:35.771 "process_max_bandwidth_mb_sec": 0 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_iscsi_set_options", 00:15:35.771 "params": { 00:15:35.771 "timeout_sec": 30 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_nvme_set_options", 00:15:35.771 "params": { 00:15:35.771 "action_on_timeout": "none", 00:15:35.771 "timeout_us": 0, 00:15:35.771 "timeout_admin_us": 0, 00:15:35.771 "keep_alive_timeout_ms": 10000, 00:15:35.771 "arbitration_burst": 0, 00:15:35.771 "low_priority_weight": 0, 00:15:35.771 "medium_priority_weight": 0, 00:15:35.771 "high_priority_weight": 0, 00:15:35.771 "nvme_adminq_poll_period_us": 10000, 00:15:35.771 "nvme_ioq_poll_period_us": 0, 00:15:35.771 "io_queue_requests": 0, 00:15:35.771 "delay_cmd_submit": true, 00:15:35.771 "transport_retry_count": 4, 00:15:35.771 "bdev_retry_count": 3, 00:15:35.771 "transport_ack_timeout": 0, 00:15:35.771 "ctrlr_loss_timeout_sec": 0, 00:15:35.771 "reconnect_delay_sec": 0, 00:15:35.771 "fast_io_fail_timeout_sec": 0, 00:15:35.771 "disable_auto_failback": false, 00:15:35.771 "generate_uuids": false, 00:15:35.771 "transport_tos": 0, 00:15:35.771 "nvme_error_stat": false, 00:15:35.771 "rdma_srq_size": 0, 00:15:35.771 "io_path_stat": false, 00:15:35.771 "allow_accel_sequence": false, 00:15:35.771 
"rdma_max_cq_size": 0, 00:15:35.771 "rdma_cm_event_timeout_ms": 0, 00:15:35.771 "dhchap_digests": [ 00:15:35.771 "sha256", 00:15:35.771 "sha384", 00:15:35.771 "sha512" 00:15:35.771 ], 00:15:35.771 "dhchap_dhgroups": [ 00:15:35.771 "null", 00:15:35.771 "ffdhe2048", 00:15:35.771 "ffdhe3072", 00:15:35.771 "ffdhe4096", 00:15:35.771 "ffdhe6144", 00:15:35.771 "ffdhe8192" 00:15:35.771 ] 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_nvme_set_hotplug", 00:15:35.771 "params": { 00:15:35.771 "period_us": 100000, 00:15:35.771 "enable": false 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_malloc_create", 00:15:35.771 "params": { 00:15:35.771 "name": "malloc0", 00:15:35.771 "num_blocks": 8192, 00:15:35.771 "block_size": 4096, 00:15:35.771 "physical_block_size": 4096, 00:15:35.771 "uuid": "1928db43-3dac-41b6-bd40-a73bbccf8d56", 00:15:35.771 "optimal_io_boundary": 0, 00:15:35.771 "md_size": 0, 00:15:35.771 "dif_type": 0, 00:15:35.771 "dif_is_head_of_md": false, 00:15:35.771 "dif_pi_format": 0 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "bdev_wait_for_examine" 00:15:35.771 } 00:15:35.771 ] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "nbd", 00:15:35.771 "config": [] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "scheduler", 00:15:35.771 "config": [ 00:15:35.771 { 00:15:35.771 "method": "framework_set_scheduler", 00:15:35.771 "params": { 00:15:35.771 "name": "static" 00:15:35.771 } 00:15:35.771 } 00:15:35.771 ] 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "subsystem": "nvmf", 00:15:35.771 "config": [ 00:15:35.771 { 00:15:35.771 "method": "nvmf_set_config", 00:15:35.771 "params": { 00:15:35.771 "discovery_filter": "match_any", 00:15:35.771 "admin_cmd_passthru": { 00:15:35.771 "identify_ctrlr": false 00:15:35.771 }, 00:15:35.771 "dhchap_digests": [ 00:15:35.771 "sha256", 00:15:35.771 "sha384", 00:15:35.771 "sha512" 00:15:35.771 ], 00:15:35.771 "dhchap_dhgroups": [ 00:15:35.771 "null", 00:15:35.771 "ffdhe2048", 00:15:35.771 "ffdhe3072", 00:15:35.771 "ffdhe4096", 00:15:35.771 "ffdhe6144", 00:15:35.771 "ffdhe8192" 00:15:35.771 ] 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_set_max_subsystems", 00:15:35.771 "params": { 00:15:35.771 "max_subsystems": 1024 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_set_crdt", 00:15:35.771 "params": { 00:15:35.771 "crdt1": 0, 00:15:35.771 "crdt2": 0, 00:15:35.771 "crdt3": 0 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_create_transport", 00:15:35.771 "params": { 00:15:35.771 "trtype": "TCP", 00:15:35.771 "max_queue_depth": 128, 00:15:35.771 "max_io_qpairs_per_ctrlr": 127, 00:15:35.771 "in_capsule_data_size": 4096, 00:15:35.771 "max_io_size": 131072, 00:15:35.771 "io_unit_size": 131072, 00:15:35.771 "max_aq_depth": 128, 00:15:35.771 "num_shared_buffers": 511, 00:15:35.771 "buf_cache_size": 4294967295, 00:15:35.771 "dif_insert_or_strip": false, 00:15:35.771 "zcopy": false, 00:15:35.771 "c2h_success": false, 00:15:35.771 "sock_priority": 0, 00:15:35.771 "abort_timeout_sec": 1, 00:15:35.771 "ack_timeout": 0, 00:15:35.771 "data_wr_pool_size": 0 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_create_subsystem", 00:15:35.771 "params": { 00:15:35.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.771 "allow_any_host": false, 00:15:35.771 "serial_number": "00000000000000000000", 00:15:35.771 "model_number": "SPDK bdev Controller", 00:15:35.771 "max_namespaces": 32, 
00:15:35.771 "min_cntlid": 1, 00:15:35.771 "max_cntlid": 65519, 00:15:35.771 "ana_reporting": false 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_subsystem_add_host", 00:15:35.771 "params": { 00:15:35.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.771 "host": "nqn.2016-06.io.spdk:host1", 00:15:35.771 "psk": "key0" 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_subsystem_add_ns", 00:15:35.771 "params": { 00:15:35.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.771 "namespace": { 00:15:35.771 "nsid": 1, 00:15:35.771 "bdev_name": "malloc0", 00:15:35.771 "nguid": "1928DB433DAC41B6BD40A73BBCCF8D56", 00:15:35.771 "uuid": "1928db43-3dac-41b6-bd40-a73bbccf8d56", 00:15:35.771 "no_auto_visible": false 00:15:35.771 } 00:15:35.771 } 00:15:35.771 }, 00:15:35.771 { 00:15:35.771 "method": "nvmf_subsystem_add_listener", 00:15:35.771 "params": { 00:15:35.771 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:35.771 "listen_address": { 00:15:35.771 "trtype": "TCP", 00:15:35.771 "adrfam": "IPv4", 00:15:35.771 "traddr": "10.0.0.3", 00:15:35.771 "trsvcid": "4420" 00:15:35.771 }, 00:15:35.771 "secure_channel": false, 00:15:35.772 "sock_impl": "ssl" 00:15:35.772 } 00:15:35.772 } 00:15:35.772 ] 00:15:35.772 } 00:15:35.772 ] 00:15:35.772 }' 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@505 -- # nvmfpid=84188 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # waitforlisten 84188 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84188 ']' 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:15:35.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:35.772 06:08:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.032 [2024-10-01 06:08:01.413861] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:15:36.032 [2024-10-01 06:08:01.414687] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.032 [2024-10-01 06:08:01.551090] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:36.032 [2024-10-01 06:08:01.588473] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.032 [2024-10-01 06:08:01.588526] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.032 [2024-10-01 06:08:01.588552] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.032 [2024-10-01 06:08:01.588559] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.032 [2024-10-01 06:08:01.588566] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.032 [2024-10-01 06:08:01.588631] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.290 [2024-10-01 06:08:01.732717] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:36.290 [2024-10-01 06:08:01.788426] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:36.290 [2024-10-01 06:08:01.827063] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:36.290 [2024-10-01 06:08:01.827310] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=84216 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 84216 /var/tmp/bdevperf.sock 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 84216 ']' 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:15:36.859 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:15:36.859 "subsystems": [ 00:15:36.859 { 00:15:36.859 "subsystem": "keyring", 00:15:36.859 "config": [ 00:15:36.859 { 00:15:36.859 "method": "keyring_file_add_key", 00:15:36.859 "params": { 00:15:36.859 "name": "key0", 00:15:36.859 "path": "/tmp/tmp.yw2ZFMTuJx" 00:15:36.859 } 00:15:36.859 } 00:15:36.859 ] 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "subsystem": "iobuf", 00:15:36.859 "config": [ 00:15:36.859 { 00:15:36.859 "method": "iobuf_set_options", 00:15:36.859 "params": { 00:15:36.859 "small_pool_count": 8192, 
00:15:36.859 "large_pool_count": 1024, 00:15:36.859 "small_bufsize": 8192, 00:15:36.859 "large_bufsize": 135168 00:15:36.859 } 00:15:36.859 } 00:15:36.859 ] 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "subsystem": "sock", 00:15:36.859 "config": [ 00:15:36.859 { 00:15:36.859 "method": "sock_set_default_impl", 00:15:36.859 "params": { 00:15:36.859 "impl_name": "uring" 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "sock_impl_set_options", 00:15:36.859 "params": { 00:15:36.859 "impl_name": "ssl", 00:15:36.859 "recv_buf_size": 4096, 00:15:36.859 "send_buf_size": 4096, 00:15:36.859 "enable_recv_pipe": true, 00:15:36.859 "enable_quickack": false, 00:15:36.859 "enable_placement_id": 0, 00:15:36.859 "enable_zerocopy_send_server": true, 00:15:36.859 "enable_zerocopy_send_client": false, 00:15:36.859 "zerocopy_threshold": 0, 00:15:36.859 "tls_version": 0, 00:15:36.859 "enable_ktls": false 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "sock_impl_set_options", 00:15:36.859 "params": { 00:15:36.859 "impl_name": "posix", 00:15:36.859 "recv_buf_size": 2097152, 00:15:36.859 "send_buf_size": 2097152, 00:15:36.859 "enable_recv_pipe": true, 00:15:36.859 "enable_quickack": false, 00:15:36.859 "enable_placement_id": 0, 00:15:36.859 "enable_zerocopy_send_server": true, 00:15:36.859 "enable_zerocopy_send_client": false, 00:15:36.859 "zerocopy_threshold": 0, 00:15:36.859 "tls_version": 0, 00:15:36.859 "enable_ktls": false 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "sock_impl_set_options", 00:15:36.859 "params": { 00:15:36.859 "impl_name": "uring", 00:15:36.859 "recv_buf_size": 2097152, 00:15:36.859 "send_buf_size": 2097152, 00:15:36.859 "enable_recv_pipe": true, 00:15:36.859 "enable_quickack": false, 00:15:36.859 "enable_placement_id": 0, 00:15:36.859 "enable_zerocopy_send_server": false, 00:15:36.859 "enable_zerocopy_send_client": false, 00:15:36.859 "zerocopy_threshold": 0, 00:15:36.859 "tls_version": 0, 00:15:36.859 "enable_ktls": false 00:15:36.859 } 00:15:36.859 } 00:15:36.859 ] 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "subsystem": "vmd", 00:15:36.859 "config": [] 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "subsystem": "accel", 00:15:36.859 "config": [ 00:15:36.859 { 00:15:36.859 "method": "accel_set_options", 00:15:36.859 "params": { 00:15:36.859 "small_cache_size": 128, 00:15:36.859 "large_cache_size": 16, 00:15:36.859 "task_count": 2048, 00:15:36.859 "sequence_count": 2048, 00:15:36.859 "buf_count": 2048 00:15:36.859 } 00:15:36.859 } 00:15:36.859 ] 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "subsystem": "bdev", 00:15:36.859 "config": [ 00:15:36.859 { 00:15:36.859 "method": "bdev_set_options", 00:15:36.859 "params": { 00:15:36.859 "bdev_io_pool_size": 65535, 00:15:36.859 "bdev_io_cache_size": 256, 00:15:36.859 "bdev_auto_examine": true, 00:15:36.859 "iobuf_small_cache_size": 128, 00:15:36.859 "iobuf_large_cache_size": 16 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "bdev_raid_set_options", 00:15:36.859 "params": { 00:15:36.859 "process_window_size_kb": 1024, 00:15:36.859 "process_max_bandwidth_mb_sec": 0 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "bdev_iscsi_set_options", 00:15:36.859 "params": { 00:15:36.859 "timeout_sec": 30 00:15:36.859 } 00:15:36.859 }, 00:15:36.859 { 00:15:36.859 "method": "bdev_nvme_set_options", 00:15:36.859 "params": { 00:15:36.859 "action_on_timeout": "none", 00:15:36.859 "timeout_us": 0, 00:15:36.859 "timeout_admin_us": 0, 00:15:36.859 
"keep_alive_timeout_ms": 10000, 00:15:36.859 "arbitration_burst": 0, 00:15:36.859 "low_priority_weight": 0, 00:15:36.859 "medium_priority_weight": 0, 00:15:36.859 "high_priority_weight": 0, 00:15:36.859 "nvme_adminq_poll_period_us": 10000, 00:15:36.859 "nvme_ioq_poll_period_us": 0, 00:15:36.859 "io_queue_requests": 512, 00:15:36.859 "delay_cmd_submit": true, 00:15:36.859 "transport_retry_count": 4, 00:15:36.859 "bdev_retry_count": 3, 00:15:36.859 "transport_ack_timeout": 0, 00:15:36.859 "ctrlr_loss_timeout_sec": 0, 00:15:36.859 "reconnect_delay_sec": 0, 00:15:36.859 "fast_io_fail_timeout_sec": 0, 00:15:36.859 "disable_auto_failback": false, 00:15:36.859 "generate_uuids": false, 00:15:36.859 "transport_tos": 0, 00:15:36.859 "nvme_error_stat": false, 00:15:36.859 "rdma_srq_size": 0, 00:15:36.859 "io_path_stat": false, 00:15:36.859 "allow_accel_sequence": false, 00:15:36.860 "rdma_max_cq_size": 0, 00:15:36.860 "rdma_cm_event_timeout_ms": 0, 00:15:36.860 "dhchap_digests": [ 00:15:36.860 "sha256", 00:15:36.860 "sha384", 00:15:36.860 "sha512" 00:15:36.860 ], 00:15:36.860 "dhchap_dhgroups": [ 00:15:36.860 "null", 00:15:36.860 "ffdhe2048", 00:15:36.860 "ffdhe3072", 00:15:36.860 "ffdhe4096", 00:15:36.860 "ffdhe6144", 00:15:36.860 "ffdhe8192" 00:15:36.860 ] 00:15:36.860 } 00:15:36.860 }, 00:15:36.860 { 00:15:36.860 "method": "bdev_nvme_attach_controller", 00:15:36.860 "params": { 00:15:36.860 "name": "nvme0", 00:15:36.860 "trtype": "TCP", 00:15:36.860 "adrfam": "IPv4", 00:15:36.860 "traddr": "10.0.0.3", 00:15:36.860 "trsvcid": "4420", 00:15:36.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:36.860 "prchk_reftag": false, 00:15:36.860 "prchk_guard": false, 00:15:36.860 "ctrlr_loss_timeout_sec": 0, 00:15:36.860 "reconnect_delay_sec": 0, 00:15:36.860 "fast_io_fail_timeout_sec": 0, 00:15:36.860 "psk": "key0", 00:15:36.860 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:36.860 "hdgst": false, 00:15:36.860 "ddgst": false 00:15:36.860 } 00:15:36.860 }, 00:15:36.860 { 00:15:36.860 "method": "bdev_nvme_set_hotplug", 00:15:36.860 "params": { 00:15:36.860 "period_us": 100000, 00:15:36.860 "enable": false 00:15:36.860 } 00:15:36.860 }, 00:15:36.860 { 00:15:36.860 "method": "bdev_enable_histogram", 00:15:36.860 "params": { 00:15:36.860 "name": "nvme0n1", 00:15:36.860 "enable": true 00:15:36.860 } 00:15:36.860 }, 00:15:36.860 { 00:15:36.860 "method": "bdev_wait_for_examine" 00:15:36.860 } 00:15:36.860 ] 00:15:36.860 }, 00:15:36.860 { 00:15:36.860 "subsystem": "nbd", 00:15:36.860 "config": [] 00:15:36.860 } 00:15:36.860 ] 00:15:36.860 }' 00:15:36.860 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:36.860 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:36.860 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:36.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:36.860 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:36.860 06:08:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:37.119 [2024-10-01 06:08:02.513524] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:15:37.119 [2024-10-01 06:08:02.514228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84216 ] 00:15:37.119 [2024-10-01 06:08:02.652350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:37.119 [2024-10-01 06:08:02.696495] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.380 [2024-10-01 06:08:02.812359] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:37.380 [2024-10-01 06:08:02.844887] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.318 06:08:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:38.578 Running I/O for 1 seconds... 00:15:39.516 3968.00 IOPS, 15.50 MiB/s 00:15:39.516 Latency(us) 00:15:39.516 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.516 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:39.516 Verification LBA range: start 0x0 length 0x2000 00:15:39.516 nvme0n1 : 1.03 3963.92 15.48 0.00 0.00 31915.60 9889.98 23116.33 00:15:39.516 =================================================================================================================== 00:15:39.516 Total : 3963.92 15.48 0.00 0.00 31915.60 9889.98 23116.33 00:15:39.516 { 00:15:39.516 "results": [ 00:15:39.516 { 00:15:39.516 "job": "nvme0n1", 00:15:39.516 "core_mask": "0x2", 00:15:39.516 "workload": "verify", 00:15:39.516 "status": "finished", 00:15:39.516 "verify_range": { 00:15:39.516 "start": 0, 00:15:39.516 "length": 8192 00:15:39.516 }, 00:15:39.516 "queue_depth": 128, 00:15:39.516 "io_size": 4096, 00:15:39.516 "runtime": 1.03332, 00:15:39.516 "iops": 3963.9221151240663, 00:15:39.516 "mibps": 15.484070762203384, 00:15:39.516 "io_failed": 0, 00:15:39.516 "io_timeout": 0, 00:15:39.516 "avg_latency_us": 31915.6, 00:15:39.516 "min_latency_us": 9889.978181818182, 00:15:39.516 "max_latency_us": 23116.334545454545 00:15:39.516 } 00:15:39.516 ], 00:15:39.516 "core_count": 1 00:15:39.516 } 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # 
'[' --id = --pid ']' 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:39.516 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:39.517 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:39.517 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:39.517 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:39.517 nvmf_trace.0 00:15:39.775 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:15:39.775 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 84216 00:15:39.775 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84216 ']' 00:15:39.775 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84216 00:15:39.775 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84216 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84216' 00:15:39.776 killing process with pid 84216 00:15:39.776 Received shutdown signal, test time was about 1.000000 seconds 00:15:39.776 00:15:39.776 Latency(us) 00:15:39.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:39.776 =================================================================================================================== 00:15:39.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84216 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84216 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:39.776 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:40.035 rmmod nvme_tcp 00:15:40.035 rmmod nvme_fabrics 00:15:40.035 rmmod nvme_keyring 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:15:40.035 06:08:05 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@513 -- # '[' -n 84188 ']' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@514 -- # killprocess 84188 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 84188 ']' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 84188 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84188 00:15:40.035 killing process with pid 84188 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84188' 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 84188 00:15:40.035 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 84188 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-save 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # iptables-restore 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.294 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.OlzAw3YFTC /tmp/tmp.lO6MNk3KrJ /tmp/tmp.yw2ZFMTuJx 00:15:40.553 ************************************ 00:15:40.553 END TEST nvmf_tls 00:15:40.553 ************************************ 00:15:40.553 00:15:40.553 real 1m20.434s 00:15:40.553 user 2m11.819s 00:15:40.553 sys 0m26.102s 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.553 ************************************ 00:15:40.553 START TEST nvmf_fips 00:15:40.553 ************************************ 00:15:40.553 06:08:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:15:40.553 * Looking for test storage... 
00:15:40.553 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:15:40.553 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:40.553 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:40.553 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lcov --version 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.813 --rc genhtml_branch_coverage=1 00:15:40.813 --rc genhtml_function_coverage=1 00:15:40.813 --rc genhtml_legend=1 00:15:40.813 --rc geninfo_all_blocks=1 00:15:40.813 --rc geninfo_unexecuted_blocks=1 00:15:40.813 00:15:40.813 ' 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.813 --rc genhtml_branch_coverage=1 00:15:40.813 --rc genhtml_function_coverage=1 00:15:40.813 --rc genhtml_legend=1 00:15:40.813 --rc geninfo_all_blocks=1 00:15:40.813 --rc geninfo_unexecuted_blocks=1 00:15:40.813 00:15:40.813 ' 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.813 --rc genhtml_branch_coverage=1 00:15:40.813 --rc genhtml_function_coverage=1 00:15:40.813 --rc genhtml_legend=1 00:15:40.813 --rc geninfo_all_blocks=1 00:15:40.813 --rc geninfo_unexecuted_blocks=1 00:15:40.813 00:15:40.813 ' 00:15:40.813 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:40.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.813 --rc genhtml_branch_coverage=1 00:15:40.813 --rc genhtml_function_coverage=1 00:15:40.813 --rc genhtml_legend=1 00:15:40.813 --rc geninfo_all_blocks=1 00:15:40.813 --rc geninfo_unexecuted_blocks=1 00:15:40.813 00:15:40.813 ' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.814 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:15:40.814 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:15:40.815 Error setting digest 00:15:40.815 40B283F5AC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:15:40.815 40B283F5AC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:40.815 
06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:40.815 Cannot find device "nvmf_init_br" 00:15:40.815 06:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:40.815 Cannot find device "nvmf_init_br2" 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:15:40.815 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:41.074 Cannot find device "nvmf_tgt_br" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:41.074 Cannot find device "nvmf_tgt_br2" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:41.074 Cannot find device "nvmf_init_br" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:41.074 Cannot find device "nvmf_init_br2" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:41.074 Cannot find device "nvmf_tgt_br" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:41.074 Cannot find device "nvmf_tgt_br2" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:41.074 Cannot find device "nvmf_br" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:41.074 Cannot find device "nvmf_init_if" 00:15:41.074 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:41.075 Cannot find device "nvmf_init_if2" 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:41.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:41.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:41.075 06:08:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:41.075 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:41.334 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:41.334 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:15:41.334 00:15:41.334 --- 10.0.0.3 ping statistics --- 00:15:41.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.334 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:41.334 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:41.334 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:15:41.334 00:15:41.334 --- 10.0.0.4 ping statistics --- 00:15:41.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.334 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:41.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:15:41.334 00:15:41.334 --- 10.0.0.1 ping statistics --- 00:15:41.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.334 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:41.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:41.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:41.334 00:15:41.334 --- 10.0.0.2 ping statistics --- 00:15:41.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.334 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@457 -- # return 0 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@505 -- # nvmfpid=84528 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@506 -- # waitforlisten 84528 00:15:41.334 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84528 ']' 00:15:41.335 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.335 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:41.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.335 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.335 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:41.335 06:08:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:41.335 [2024-10-01 06:08:06.875871] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
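The nvmf_veth_init trace above builds a bridged veth topology: the initiator-side interfaces (10.0.0.1 and 10.0.0.2) stay in the default namespace, the target-side interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and the host-side peers are all joined on the nvmf_br bridge with TCP port 4420 opened in iptables. A condensed sketch of the same commands follows (one veth pair per side shown; the *_if2/*_br2 pair is handled identically, and the iptables comment tagging is omitted) — this mirrors the trace, it is not the common.sh implementation verbatim:

    # Condensed sketch of the topology exercised by nvmf_veth_init above.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator side, one for the target side.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    # Addressing: initiator in the default namespace, target inside the netns.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Join the host-side peers on one bridge and open TCP/4420 for NVMe-oF.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

    # Connectivity check in both directions before the target app starts.
    ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

Once all four pings succeed, nvmf_tgt is launched inside nvmf_tgt_ns_spdk, which is the startup shown next in the trace.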
00:15:41.335 [2024-10-01 06:08:06.876643] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.594 [2024-10-01 06:08:07.019532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.594 [2024-10-01 06:08:07.064033] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.594 [2024-10-01 06:08:07.064092] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.594 [2024-10-01 06:08:07.064105] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.594 [2024-10-01 06:08:07.064115] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.594 [2024-10-01 06:08:07.064124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:41.594 [2024-10-01 06:08:07.064154] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:41.594 [2024-10-01 06:08:07.099648] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9yM 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9yM 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9yM 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9yM 00:15:41.594 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:42.163 [2024-10-01 06:08:07.523230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:42.163 [2024-10-01 06:08:07.539130] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:15:42.163 [2024-10-01 06:08:07.539376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:42.163 malloc0 00:15:42.163 06:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:42.163 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=84562 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 84562 /var/tmp/bdevperf.sock 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 84562 ']' 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:42.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:42.164 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:42.164 [2024-10-01 06:08:07.696777] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:42.164 [2024-10-01 06:08:07.696926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84562 ] 00:15:42.423 [2024-10-01 06:08:07.832119] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:42.423 [2024-10-01 06:08:07.877964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.423 [2024-10-01 06:08:07.915696] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:42.423 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:42.423 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:15:42.423 06:08:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9yM 00:15:42.684 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:15:42.943 [2024-10-01 06:08:08.543459] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:15:43.203 TLSTESTn1 00:15:43.203 06:08:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:43.203 Running I/O for 10 seconds... 
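Before the I/O loop starts, the initiator-side setup driven by fips.sh in the trace above reduces to four commands, collected here in one place ($SPDK_REPO stands in for /home/vagrant/spdk_repo/spdk; bdevperf itself is launched in the background by the harness):

    # 1. Start bdevperf idle (-z) with its own RPC socket.
    $SPDK_REPO/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &

    # 2. Register the NVMe/TCP PSK written to /tmp/spdk-psk.9yM with the keyring.
    $SPDK_REPO/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        keyring_file_add_key key0 /tmp/spdk-psk.9yM

    # 3. Attach to the TLS listener running in the target namespace, using that key.
    $SPDK_REPO/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

    # 4. Kick off the queued verify workload against TLSTESTn1.
    $SPDK_REPO/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The attach in step 3 only succeeds because the PSK registered as key0 matches the key the target was configured with via setup_nvmf_tgt_conf earlier in the trace; the ten-second verify run that follows produces the per-second IOPS samples and latency summary below.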
00:15:53.471 3944.00 IOPS, 15.41 MiB/s 3987.50 IOPS, 15.58 MiB/s 4012.00 IOPS, 15.67 MiB/s 4215.00 IOPS, 16.46 MiB/s 4332.40 IOPS, 16.92 MiB/s 4399.33 IOPS, 17.18 MiB/s 4449.29 IOPS, 17.38 MiB/s 4482.12 IOPS, 17.51 MiB/s 4510.67 IOPS, 17.62 MiB/s 4536.20 IOPS, 17.72 MiB/s 00:15:53.471 Latency(us) 00:15:53.471 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.471 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:15:53.471 Verification LBA range: start 0x0 length 0x2000 00:15:53.471 TLSTESTn1 : 10.01 4542.09 17.74 0.00 0.00 28132.45 4676.89 25499.46 00:15:53.471 =================================================================================================================== 00:15:53.471 Total : 4542.09 17.74 0.00 0.00 28132.45 4676.89 25499.46 00:15:53.471 { 00:15:53.471 "results": [ 00:15:53.472 { 00:15:53.472 "job": "TLSTESTn1", 00:15:53.472 "core_mask": "0x4", 00:15:53.472 "workload": "verify", 00:15:53.472 "status": "finished", 00:15:53.472 "verify_range": { 00:15:53.472 "start": 0, 00:15:53.472 "length": 8192 00:15:53.472 }, 00:15:53.472 "queue_depth": 128, 00:15:53.472 "io_size": 4096, 00:15:53.472 "runtime": 10.013461, 00:15:53.472 "iops": 4542.0858981724705, 00:15:53.472 "mibps": 17.742523039736213, 00:15:53.472 "io_failed": 0, 00:15:53.472 "io_timeout": 0, 00:15:53.472 "avg_latency_us": 28132.453752813304, 00:15:53.472 "min_latency_us": 4676.887272727273, 00:15:53.472 "max_latency_us": 25499.46181818182 00:15:53.472 } 00:15:53.472 ], 00:15:53.472 "core_count": 1 00:15:53.472 } 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:53.472 nvmf_trace.0 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 84562 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84562 ']' 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84562 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- 
# ps --no-headers -o comm= 84562 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:15:53.472 killing process with pid 84562 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84562' 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84562 00:15:53.472 Received shutdown signal, test time was about 10.000000 seconds 00:15:53.472 00:15:53.472 Latency(us) 00:15:53.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:53.472 =================================================================================================================== 00:15:53.472 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:53.472 06:08:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 84562 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:53.730 rmmod nvme_tcp 00:15:53.730 rmmod nvme_fabrics 00:15:53.730 rmmod nvme_keyring 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@513 -- # '[' -n 84528 ']' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@514 -- # killprocess 84528 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 84528 ']' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 84528 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84528 00:15:53.730 killing process with pid 84528 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84528' 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 84528 00:15:53.730 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # 
wait 84528 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-save 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@787 -- # iptables-restore 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:53.988 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:53.989 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:53.989 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:15:53.989 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9yM 00:15:53.989 ************************************ 00:15:53.989 END TEST nvmf_fips 00:15:53.989 ************************************ 00:15:53.989 00:15:53.989 real 0m13.593s 00:15:53.989 user 0m18.712s 00:15:53.989 sys 0m5.666s 00:15:53.989 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:15:53.989 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:54.248 ************************************ 00:15:54.248 START TEST nvmf_control_msg_list 00:15:54.248 ************************************ 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:15:54.248 * Looking for test storage... 00:15:54.248 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lcov --version 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.248 --rc genhtml_branch_coverage=1 00:15:54.248 --rc genhtml_function_coverage=1 00:15:54.248 --rc genhtml_legend=1 00:15:54.248 --rc geninfo_all_blocks=1 00:15:54.248 --rc geninfo_unexecuted_blocks=1 00:15:54.248 00:15:54.248 ' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.248 --rc genhtml_branch_coverage=1 00:15:54.248 --rc genhtml_function_coverage=1 00:15:54.248 --rc genhtml_legend=1 00:15:54.248 --rc geninfo_all_blocks=1 00:15:54.248 --rc geninfo_unexecuted_blocks=1 00:15:54.248 00:15:54.248 ' 00:15:54.248 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:54.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.248 --rc genhtml_branch_coverage=1 00:15:54.249 --rc genhtml_function_coverage=1 00:15:54.249 --rc genhtml_legend=1 00:15:54.249 --rc geninfo_all_blocks=1 00:15:54.249 --rc geninfo_unexecuted_blocks=1 00:15:54.249 00:15:54.249 ' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:54.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:54.249 --rc genhtml_branch_coverage=1 00:15:54.249 --rc genhtml_function_coverage=1 00:15:54.249 --rc genhtml_legend=1 00:15:54.249 --rc geninfo_all_blocks=1 00:15:54.249 --rc geninfo_unexecuted_blocks=1 00:15:54.249 00:15:54.249 ' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:54.249 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:54.249 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:54.508 Cannot find device "nvmf_init_br" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:54.508 Cannot find device "nvmf_init_br2" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:54.508 Cannot find device "nvmf_tgt_br" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:54.508 Cannot find device "nvmf_tgt_br2" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:54.508 Cannot find device "nvmf_init_br" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:54.508 Cannot find device "nvmf_init_br2" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:54.508 Cannot find device "nvmf_tgt_br" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:54.508 Cannot find device "nvmf_tgt_br2" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:54.508 Cannot find device "nvmf_br" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:54.508 Cannot find 
device "nvmf_init_if" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:54.508 Cannot find device "nvmf_init_if2" 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:54.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:54.508 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:15:54.508 06:08:19 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:54.508 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:54.768 06:08:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:54.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:54.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:15:54.768 00:15:54.768 --- 10.0.0.3 ping statistics --- 00:15:54.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.768 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:54.768 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:54.768 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:15:54.768 00:15:54.768 --- 10.0.0.4 ping statistics --- 00:15:54.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.768 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:54.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:15:54.768 00:15:54.768 --- 10.0.0.1 ping statistics --- 00:15:54.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.768 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:54.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:15:54.768 00:15:54.768 --- 10.0.0.2 ping statistics --- 00:15:54.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.768 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@457 -- # return 0 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@505 -- # nvmfpid=84943 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@506 -- # waitforlisten 84943 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@831 -- # '[' -z 84943 ']' 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:54.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
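Editor's note: the virtual topology that the trace above builds (nvmf_veth_init plus the SPDK_NVMF-tagged iptables rules) can be condensed into a short standalone script. This is a minimal sketch assembled only from the ip/iptables commands visible in the log; the interface names, the nvmf_tgt_ns_spdk namespace, and the 10.0.0.0/24 addressing are taken from the trace, and the script assumes it runs as root on a host where those names are free.

#!/usr/bin/env bash
# Minimal sketch of the veth/netns/bridge topology seen in the trace above:
# two initiator pairs on the host, two target pairs inside a namespace,
# all joined by one bridge, with NVMe/TCP (port 4420) allowed in.
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# veth pairs: the *_if ends carry the IP addresses, the *_br ends join the bridge.
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

# Target-side interfaces live inside the namespace.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"

# Addressing as in the log: initiators .1/.2, targets .3/.4.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

# Bring the host-side ends and the bridge up.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
           nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done

# Bring the namespaced ends (and loopback) up as well.
ip netns exec "$NS" ip link set lo up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up

# Enslave all bridge-side ends to nvmf_br.
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Allow NVMe/TCP in, tagged so the rules can be removed selectively later.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:init_if'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:init_if2'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:forward'

# Sanity check, as in the trace: each side should reach the other.
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1

Teardown in the log is the mirror image: nomaster/down on the bridge ends, deleting the bridge and veth devices and the namespace, and restoring iptables via iptables-save | grep -v SPDK_NVMF | iptables-restore so that only the tagged rules are dropped.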
00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:54.768 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:54.768 [2024-10-01 06:08:20.359786] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:15:54.768 [2024-10-01 06:08:20.360059] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.028 [2024-10-01 06:08:20.500278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.028 [2024-10-01 06:08:20.541759] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.028 [2024-10-01 06:08:20.541827] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.028 [2024-10-01 06:08:20.541842] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.028 [2024-10-01 06:08:20.541852] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.028 [2024-10-01 06:08:20.541861] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.028 [2024-10-01 06:08:20.541892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.028 [2024-10-01 06:08:20.577407] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:55.028 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:55.028 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # return 0 00:15:55.028 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:55.028 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:55.028 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.287 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.288 [2024-10-01 06:08:20.672375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.288 Malloc0 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:55.288 [2024-10-01 06:08:20.718446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=84968 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=84969 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=84970 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:55.288 06:08:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 84968 00:15:55.288 [2024-10-01 06:08:20.897207] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:55.288 [2024-10-01 06:08:20.897841] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:55.288 [2024-10-01 06:08:20.898525] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:56.665 Initializing NVMe Controllers 00:15:56.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:56.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:15:56.665 Initialization complete. Launching workers. 00:15:56.665 ======================================================== 00:15:56.665 Latency(us) 00:15:56.665 Device Information : IOPS MiB/s Average min max 00:15:56.665 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 3844.97 15.02 259.80 177.93 3258.89 00:15:56.665 ======================================================== 00:15:56.665 Total : 3844.97 15.02 259.80 177.93 3258.89 00:15:56.665 00:15:56.665 Initializing NVMe Controllers 00:15:56.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:56.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:15:56.665 Initialization complete. Launching workers. 00:15:56.665 ======================================================== 00:15:56.665 Latency(us) 00:15:56.665 Device Information : IOPS MiB/s Average min max 00:15:56.665 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3861.00 15.08 258.68 198.04 505.55 00:15:56.665 ======================================================== 00:15:56.665 Total : 3861.00 15.08 258.68 198.04 505.55 00:15:56.665 00:15:56.665 Initializing NVMe Controllers 00:15:56.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:15:56.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:15:56.665 Initialization complete. Launching workers. 
00:15:56.665 ======================================================== 00:15:56.665 Latency(us) 00:15:56.665 Device Information : IOPS MiB/s Average min max 00:15:56.665 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3861.00 15.08 258.67 196.65 464.12 00:15:56.665 ======================================================== 00:15:56.665 Total : 3861.00 15.08 258.67 196.65 464.12 00:15:56.665 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 84969 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 84970 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # nvmfcleanup 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:56.665 06:08:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:56.665 rmmod nvme_tcp 00:15:56.666 rmmod nvme_fabrics 00:15:56.666 rmmod nvme_keyring 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@513 -- # '[' -n 84943 ']' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@514 -- # killprocess 84943 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@950 -- # '[' -z 84943 ']' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # kill -0 84943 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # uname 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84943 00:15:56.666 killing process with pid 84943 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84943' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@969 -- # kill 84943 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@974 -- # wait 84943 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-restore 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@787 -- # iptables-save 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:15:56.666 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:15:56.925 00:15:56.925 real 0m2.836s 00:15:56.925 user 0m4.677s 00:15:56.925 
sys 0m1.279s 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:15:56.925 ************************************ 00:15:56.925 END TEST nvmf_control_msg_list 00:15:56.925 ************************************ 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:56.925 ************************************ 00:15:56.925 START TEST nvmf_wait_for_buf 00:15:56.925 ************************************ 00:15:56.925 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:15:57.184 * Looking for test storage... 00:15:57.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lcov --version 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:15:57.184 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:57.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.185 --rc genhtml_branch_coverage=1 00:15:57.185 --rc genhtml_function_coverage=1 00:15:57.185 --rc genhtml_legend=1 00:15:57.185 --rc geninfo_all_blocks=1 00:15:57.185 --rc geninfo_unexecuted_blocks=1 00:15:57.185 00:15:57.185 ' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:57.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.185 --rc genhtml_branch_coverage=1 00:15:57.185 --rc genhtml_function_coverage=1 00:15:57.185 --rc genhtml_legend=1 00:15:57.185 --rc geninfo_all_blocks=1 00:15:57.185 --rc geninfo_unexecuted_blocks=1 00:15:57.185 00:15:57.185 ' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:57.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.185 --rc genhtml_branch_coverage=1 00:15:57.185 --rc genhtml_function_coverage=1 00:15:57.185 --rc genhtml_legend=1 00:15:57.185 --rc geninfo_all_blocks=1 00:15:57.185 --rc geninfo_unexecuted_blocks=1 00:15:57.185 00:15:57.185 ' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:57.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:57.185 --rc genhtml_branch_coverage=1 00:15:57.185 --rc genhtml_function_coverage=1 00:15:57.185 --rc genhtml_legend=1 00:15:57.185 --rc geninfo_all_blocks=1 00:15:57.185 --rc geninfo_unexecuted_blocks=1 00:15:57.185 00:15:57.185 ' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:57.185 06:08:22 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:57.185 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 
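Editor's note: the nvmf_control_msg_list run that finished above reduces to a handful of RPC calls plus three concurrent perf clients. The sketch below is a rough outline, not the test script itself: rpc_cmd in the trace is assumed to wrap scripts/rpc.py against the default /var/tmp/spdk.sock, and the rpc.py path is inferred from the repo layout shown elsewhere in the log; every flag is copied from the rpc_cmd and spdk_nvme_perf invocations in the trace.

#!/usr/bin/env bash
# Rough outline of the control_msg_list flow traced above (assumes an
# nvmf_tgt is already running inside nvmf_tgt_ns_spdk and listening on
# the default /var/tmp/spdk.sock RPC socket).
set -euo pipefail

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py            # assumed path
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
NQN=nqn.2024-07.io.spdk:cnode0

# TCP transport with a deliberately tiny control message list (1 entry) and a
# small in-capsule data size; option string copied verbatim from the trace.
"$RPC" nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1

# Subsystem with one Malloc namespace, listening on the namespaced target IP.
"$RPC" nvmf_create_subsystem "$NQN" -a
"$RPC" bdev_malloc_create -b Malloc0 32 512
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
"$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

# Three perf instances on different cores hammer the same listener at queue
# depth 1 for one second each; the test passes if all of them complete.
for mask in 0x2 0x4 0x8; do
    "$PERF" -c "$mask" -q 1 -o 4096 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' &
done
wait

The nvmf_wait_for_buf test being set up here follows the same shape, but first shrinks the iobuf small pool (iobuf_set_options --small-pool-count 154 --small_bufsize=8192), creates the transport with -u 8192 -n 24 -b 24, drives it with spdk_nvme_perf -q 4 -o 131072, and then requires iobuf_get_stats to report a non-zero small_pool.retry count, as seen further down in the log.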
00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@472 -- # prepare_net_devs 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:57.185 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:15:57.186 Cannot find device "nvmf_init_br" 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:15:57.186 Cannot find device "nvmf_init_br2" 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:15:57.186 Cannot find device "nvmf_tgt_br" 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:15:57.186 Cannot find device "nvmf_tgt_br2" 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:15:57.186 Cannot find device "nvmf_init_br" 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:15:57.186 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:15:57.479 Cannot find device "nvmf_init_br2" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:15:57.479 Cannot find device "nvmf_tgt_br" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:15:57.479 Cannot find device "nvmf_tgt_br2" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:15:57.479 Cannot find device "nvmf_br" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:15:57.479 Cannot find device "nvmf_init_if" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:15:57.479 Cannot find device "nvmf_init_if2" 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:57.479 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:57.479 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:15:57.479 06:08:22 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:57.479 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:15:57.737 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:57.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:57.737 00:15:57.737 --- 10.0.0.3 ping statistics --- 00:15:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.737 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:15:57.737 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:15:57.737 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:15:57.737 00:15:57.737 --- 10.0.0.4 ping statistics --- 00:15:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.737 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:57.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:57.737 00:15:57.737 --- 10.0.0.1 ping statistics --- 00:15:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.737 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:15:57.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:15:57.737 00:15:57.737 --- 10.0.0.2 ping statistics --- 00:15:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.737 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@457 -- # return 0 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:57.737 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@505 -- # nvmfpid=85206 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@506 -- # waitforlisten 85206 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@831 -- # '[' -z 85206 ']' 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:57.738 06:08:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:57.738 [2024-10-01 06:08:23.223684] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:15:57.738 [2024-10-01 06:08:23.223786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.996 [2024-10-01 06:08:23.363250] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.996 [2024-10-01 06:08:23.394031] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.996 [2024-10-01 06:08:23.394096] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:57.996 [2024-10-01 06:08:23.394122] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.996 [2024-10-01 06:08:23.394129] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.996 [2024-10-01 06:08:23.394135] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.996 [2024-10-01 06:08:23.394159] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # return 0 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.561 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 [2024-10-01 06:08:24.211412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 Malloc0 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 [2024-10-01 06:08:24.253128] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:15:58.820 [2024-10-01 06:08:24.289189] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.820 06:08:24 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:15:59.079 [2024-10-01 06:08:24.456062] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:16:00.457 Initializing NVMe Controllers 00:16:00.457 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:16:00.457 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:16:00.457 Initialization complete. Launching workers. 00:16:00.457 ======================================================== 00:16:00.457 Latency(us) 00:16:00.457 Device Information : IOPS MiB/s Average min max 00:16:00.457 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 496.00 62.00 8104.81 6954.60 11934.92 00:16:00.457 ======================================================== 00:16:00.457 Total : 496.00 62.00 8104.81 6954.60 11934.92 00:16:00.457 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4712 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4712 -eq 0 ]] 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:00.457 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:00.457 rmmod nvme_tcp 00:16:00.457 rmmod nvme_fabrics 00:16:00.457 rmmod nvme_keyring 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@513 -- # '[' -n 85206 ']' 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@514 -- # killprocess 85206 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@950 -- # '[' -z 85206 ']' 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- 
# kill -0 85206 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # uname 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85206 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.458 killing process with pid 85206 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85206' 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@969 -- # kill 85206 00:16:00.458 06:08:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@974 -- # wait 85206 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-restore 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # iptables-save 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:00.458 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:16:00.717 00:16:00.717 real 0m3.775s 00:16:00.717 user 0m3.297s 00:16:00.717 sys 0m0.752s 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.717 ************************************ 00:16:00.717 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:16:00.717 END TEST nvmf_wait_for_buf 00:16:00.717 ************************************ 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:00.978 ************************************ 00:16:00.978 START TEST nvmf_fuzz 00:16:00.978 ************************************ 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:16:00.978 * Looking for test storage... 
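The nvmf_wait_for_buf run that just finished deliberately shrinks the small iobuf pool so that reads issued by the perf initiator have to wait for buffers; a non-zero small_pool.retry count (4712 above) is the pass condition. A condensed sketch of the sequence it drove, assuming an rpc.py invocation equivalent to the rpc_cmd helper seen in the trace:

    rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
    rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192   # tiny pool on purpose
    rpc.py framework_start_init
    rpc.py bdev_malloc_create -b Malloc0 32 512
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
    rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
    # the test fails if no request ever had to retry for a small buffer
    rpc.py iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'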
00:16:00.978 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lcov --version 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:00.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.978 --rc genhtml_branch_coverage=1 00:16:00.978 --rc genhtml_function_coverage=1 00:16:00.978 --rc genhtml_legend=1 00:16:00.978 --rc geninfo_all_blocks=1 00:16:00.978 --rc geninfo_unexecuted_blocks=1 00:16:00.978 00:16:00.978 ' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:00.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.978 --rc genhtml_branch_coverage=1 00:16:00.978 --rc genhtml_function_coverage=1 00:16:00.978 --rc genhtml_legend=1 00:16:00.978 --rc geninfo_all_blocks=1 00:16:00.978 --rc geninfo_unexecuted_blocks=1 00:16:00.978 00:16:00.978 ' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:00.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.978 --rc genhtml_branch_coverage=1 00:16:00.978 --rc genhtml_function_coverage=1 00:16:00.978 --rc genhtml_legend=1 00:16:00.978 --rc geninfo_all_blocks=1 00:16:00.978 --rc geninfo_unexecuted_blocks=1 00:16:00.978 00:16:00.978 ' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:00.978 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.978 --rc genhtml_branch_coverage=1 00:16:00.978 --rc genhtml_function_coverage=1 00:16:00.978 --rc genhtml_legend=1 00:16:00.978 --rc geninfo_all_blocks=1 00:16:00.978 --rc geninfo_unexecuted_blocks=1 00:16:00.978 00:16:00.978 ' 00:16:00.978 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
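The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on separators and compares them component by component to decide whether the old --rc lcov_* coverage options are needed. A simplified, hedged reimplementation of that comparison (the real cmp_versions iterates over the longer of the two component lists via its decimal helper; this sketch stops at the shorter one):

    lt() {
        local -a v1 v2; local i
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for ((i = 0; i < ${#v1[@]} && i < ${#v2[@]}; i++)); do
            ((10#${v1[i]} < 10#${v2[i]})) && return 0   # strictly smaller component: "less than"
            ((10#${v1[i]} > 10#${v2[i]})) && return 1
        done
        return 1
    }

    if lt "$(lcov --version | awk '{print $NF}')" 2; then
        lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi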
00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:00.979 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
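In the common.sh setup traced above, the host identity for any later nvme connect is generated once per run: nvme gen-hostnqn produces a uuid-based NQN, and the uuid suffix is reused as the host ID. A small sketch of that derivation, assuming the host ID is simply the NQN with its nqn.2014-08.org.nvmexpress:uuid: prefix stripped (the trace only shows the resulting values, not the exact expansion common.sh uses):

    NVME_HOSTNQN=$(nvme gen-hostnqn)         # e.g. nqn.2014-08.org.nvmexpress:uuid:a979a798-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}          # bare uuid, reused as --hostid
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")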
00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:00.979 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:01.239 Cannot find device "nvmf_init_br" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:16:01.239 06:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:01.239 Cannot find device "nvmf_init_br2" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:01.239 Cannot find device "nvmf_tgt_br" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:01.239 Cannot find device "nvmf_tgt_br2" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:01.239 Cannot find device "nvmf_init_br" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:01.239 Cannot find device "nvmf_init_br2" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:01.239 Cannot find device "nvmf_tgt_br" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:01.239 Cannot find device "nvmf_tgt_br2" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:01.239 Cannot find device "nvmf_br" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:01.239 Cannot find device "nvmf_init_if" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:01.239 Cannot find device "nvmf_init_if2" 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:01.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:01.239 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:01.239 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:01.499 06:08:26 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:01.499 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:01.499 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:16:01.499 00:16:01.499 --- 10.0.0.3 ping statistics --- 00:16:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.499 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:01.499 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:01.499 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:16:01.499 00:16:01.499 --- 10.0.0.4 ping statistics --- 00:16:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.499 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:01.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:01.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:16:01.499 00:16:01.499 --- 10.0.0.1 ping statistics --- 00:16:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.499 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:01.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:01.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:01.499 00:16:01.499 --- 10.0.0.2 ping statistics --- 00:16:01.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:01.499 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@457 -- # return 0 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=85472 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 85472 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@831 -- # '[' -z 85472 ']' 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:01.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
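The nvmf_veth_init sequence above is what gives the fuzz target its network: a dedicated namespace holding the target-side veth ends, a bridge joining both sides, per-interface /24 addresses, and iptables ACCEPT rules for port 4420, verified by the four pings. Condensed into a sketch with the same names and addresses (the second if2/br2 pair and the 10.0.0.2/10.0.0.4 addresses are set up the same way and omitted here):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator side stays in the host
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target side moves into the namespace
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3    # host to target namespace, as in the trace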
00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:01.499 06:08:26 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # return 0 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.758 Malloc0 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:01.758 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.759 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.017 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.017 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:16:02.017 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:16:02.277 Shutting down the fuzz application 00:16:02.277 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:16:02.537 Shutting down the fuzz application 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:02.537 06:08:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:02.537 rmmod nvme_tcp 00:16:02.537 rmmod nvme_fabrics 00:16:02.537 rmmod nvme_keyring 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@513 -- # '[' -n 85472 ']' 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@514 -- # killprocess 85472 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@950 -- # '[' -z 85472 ']' 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # kill -0 85472 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85472 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:02.537 killing process with pid 85472 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85472' 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@969 -- # kill 85472 00:16:02.537 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@974 -- # wait 85472 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:02.814 06:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-save 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # iptables-restore 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:02.814 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.078 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:16:03.079 00:16:03.079 real 0m2.151s 00:16:03.079 user 0m1.837s 00:16:03.079 sys 0m0.659s 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:03.079 ************************************ 00:16:03.079 END TEST nvmf_fuzz 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:03.079 ************************************ 00:16:03.079 06:08:28 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:03.079 ************************************ 00:16:03.079 START TEST nvmf_multiconnection 00:16:03.079 ************************************ 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:16:03.079 * Looking for test storage... 00:16:03.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:03.079 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lcov --version 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:03.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.339 --rc genhtml_branch_coverage=1 00:16:03.339 --rc genhtml_function_coverage=1 00:16:03.339 --rc genhtml_legend=1 00:16:03.339 --rc geninfo_all_blocks=1 00:16:03.339 --rc geninfo_unexecuted_blocks=1 00:16:03.339 00:16:03.339 ' 00:16:03.339 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.340 --rc genhtml_branch_coverage=1 00:16:03.340 --rc genhtml_function_coverage=1 00:16:03.340 --rc genhtml_legend=1 00:16:03.340 --rc geninfo_all_blocks=1 00:16:03.340 --rc geninfo_unexecuted_blocks=1 00:16:03.340 00:16:03.340 ' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.340 --rc genhtml_branch_coverage=1 00:16:03.340 --rc genhtml_function_coverage=1 00:16:03.340 --rc genhtml_legend=1 00:16:03.340 --rc geninfo_all_blocks=1 00:16:03.340 --rc geninfo_unexecuted_blocks=1 00:16:03.340 00:16:03.340 ' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:03.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:03.340 --rc genhtml_branch_coverage=1 00:16:03.340 --rc genhtml_function_coverage=1 00:16:03.340 --rc genhtml_legend=1 00:16:03.340 --rc geninfo_all_blocks=1 00:16:03.340 --rc geninfo_unexecuted_blocks=1 00:16:03.340 00:16:03.340 ' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.340 
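[note] The hostnqn/hostid pair generated just above is reused for every nvme connect later in this test. A minimal sketch of that setup, assuming only the nvme-cli gen-hostnqn call the trace shows (the exact shell nvmf/common.sh uses to split out the host ID may differ):
  # generate a host NQN once and reuse the embedded UUID as the host ID (sketch)
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # assumption: hostid is the bare UUID taken from the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")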
06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:03.340 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:03.340 06:08:28 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:03.340 Cannot find device "nvmf_init_br" 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:16:03.340 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:03.340 Cannot find device "nvmf_init_br2" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:03.341 Cannot find device "nvmf_tgt_br" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:03.341 Cannot find device "nvmf_tgt_br2" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:03.341 Cannot find device "nvmf_init_br" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:03.341 Cannot find device "nvmf_init_br2" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:03.341 Cannot find device "nvmf_tgt_br" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:03.341 Cannot find device "nvmf_tgt_br2" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:03.341 Cannot find device "nvmf_br" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:03.341 Cannot find device "nvmf_init_if" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:16:03.341 Cannot find device "nvmf_init_if2" 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:03.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:03.341 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:03.341 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:03.598 06:08:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:03.598 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:03.598 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:16:03.598 00:16:03.598 --- 10.0.0.3 ping statistics --- 00:16:03.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.598 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:03.598 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:03.598 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.056 ms 00:16:03.598 00:16:03.598 --- 10.0.0.4 ping statistics --- 00:16:03.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.598 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:03.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:03.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:16:03.598 00:16:03.598 --- 10.0.0.1 ping statistics --- 00:16:03.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.598 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:03.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:03.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:16:03.598 00:16:03.598 --- 10.0.0.2 ping statistics --- 00:16:03.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:03.598 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@457 -- # return 0 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:16:03.598 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@505 -- # nvmfpid=85704 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@506 -- # waitforlisten 85704 00:16:03.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@831 -- # '[' -z 85704 ']' 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
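[note] The "Cannot find device ..." messages above are just the cleanup pass finding nothing to delete on a fresh runner; nvmf_veth_init then builds the topology from scratch and the four pings confirm initiator and target can reach each other across the bridge. Boiled down to the commands visible in the trace (trimmed to one initiator/target pair; nvmf_init_if2/nvmf_tgt_if2 follow the same pattern), the setup and target launch look roughly like:
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  # the target then runs inside the namespace; waitforlisten polls /var/tmp/spdk.sock until its RPC server answers
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &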
00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:03.857 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:03.857 [2024-10-01 06:08:29.280149] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:03.857 [2024-10-01 06:08:29.280445] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:03.857 [2024-10-01 06:08:29.423277] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:03.857 [2024-10-01 06:08:29.467515] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:03.857 [2024-10-01 06:08:29.467819] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:03.857 [2024-10-01 06:08:29.468006] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:03.857 [2024-10-01 06:08:29.468190] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:03.857 [2024-10-01 06:08:29.468232] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:03.857 [2024-10-01 06:08:29.468506] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:03.857 [2024-10-01 06:08:29.468653] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:03.857 [2024-10-01 06:08:29.468738] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:03.857 [2024-10-01 06:08:29.468740] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:04.117 [2024-10-01 06:08:29.504732] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # return 0 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 [2024-10-01 06:08:29.606096] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:16:04.117 06:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 Malloc1 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 [2024-10-01 06:08:29.662968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 Malloc2 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 Malloc3 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:16:04.117 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.377 Malloc4 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:16:04.377 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 Malloc5 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:16:04.378 
06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 Malloc6 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 Malloc7 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 Malloc8 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 
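[note] Each pass of the loop traced here provisions one namespace on the target: a 64 MiB malloc bdev with 512-byte blocks, a subsystem allowing any host with serial SPDKn, the bdev attached as a namespace, and a TCP listener on 10.0.0.3:4420. In plain rpc.py terms (rpc_cmd is the test wrapper around SPDK's scripts/rpc.py; shown for cnode1, the other ten iterations are identical up to numbering):
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # done once, before the loop
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB bdev, 512 B blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420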
06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 Malloc9 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.378 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.637 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.637 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:16:04.637 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.637 06:08:29 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 Malloc10 00:16:04.637 06:08:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.637 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 Malloc11 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:04.638 06:08:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:07.173 06:08:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:09.077 06:08:34 
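[note] On the initiator side the loop above connects to each subsystem in turn and then waits for the block device to appear, keyed off its serial number. Stripped of the test helpers, one iteration is roughly the following (hostnqn/hostid are the values traced earlier; SPDK1..SPDK11 are the serials assigned at subsystem creation, and the polling loop is a simplification of waitforserial):
  nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  # wait until a block device with the expected serial shows up
  until [[ $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) -ge 1 ]]; do sleep 2; done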
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:09.077 06:08:34 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:10.980 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:16:11.239 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:16:11.239 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.239 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:11.239 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:16:11.239 06:08:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:13.142 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:16:13.401 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:16:13.401 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:13.401 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.401 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:13.401 06:08:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:15.304 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:16:15.564 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:16:15.564 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:15.564 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:16:15.564 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:15.564 06:08:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:17.469 06:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:17.469 06:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:17.469 06:08:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:16:17.469 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:17.469 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:17.469 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:17.469 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:17.469 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:16:17.727 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:16:17.727 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:17.727 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:17.727 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:17.727 06:08:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:19.663 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:16:19.922 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:16:19.922 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1198 -- # local i=0 00:16:19.922 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:19.922 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:19.922 06:08:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:21.827 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:21.827 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:21.827 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:16:21.827 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:21.827 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:21.828 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:21.828 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:21.828 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:16:22.117 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:16:22.117 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:22.117 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:22.117 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:22.117 06:08:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:24.020 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:16:24.279 06:08:49 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:16:24.279 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:24.279 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:24.279 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:24.279 06:08:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:26.183 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:16:26.442 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:16:26.442 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.442 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:26.442 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:26.442 06:08:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:16:28.345 06:08:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:16:28.345 [global] 00:16:28.345 thread=1 00:16:28.345 invalidate=1 00:16:28.345 rw=read 00:16:28.345 time_based=1 
00:16:28.345 runtime=10 00:16:28.345 ioengine=libaio 00:16:28.345 direct=1 00:16:28.345 bs=262144 00:16:28.345 iodepth=64 00:16:28.345 norandommap=1 00:16:28.345 numjobs=1 00:16:28.345 00:16:28.345 [job0] 00:16:28.345 filename=/dev/nvme0n1 00:16:28.345 [job1] 00:16:28.345 filename=/dev/nvme10n1 00:16:28.345 [job2] 00:16:28.345 filename=/dev/nvme1n1 00:16:28.345 [job3] 00:16:28.345 filename=/dev/nvme2n1 00:16:28.345 [job4] 00:16:28.345 filename=/dev/nvme3n1 00:16:28.345 [job5] 00:16:28.345 filename=/dev/nvme4n1 00:16:28.345 [job6] 00:16:28.345 filename=/dev/nvme5n1 00:16:28.345 [job7] 00:16:28.345 filename=/dev/nvme6n1 00:16:28.345 [job8] 00:16:28.345 filename=/dev/nvme7n1 00:16:28.345 [job9] 00:16:28.345 filename=/dev/nvme8n1 00:16:28.345 [job10] 00:16:28.345 filename=/dev/nvme9n1 00:16:28.605 Could not set queue depth (nvme0n1) 00:16:28.605 Could not set queue depth (nvme10n1) 00:16:28.605 Could not set queue depth (nvme1n1) 00:16:28.605 Could not set queue depth (nvme2n1) 00:16:28.605 Could not set queue depth (nvme3n1) 00:16:28.605 Could not set queue depth (nvme4n1) 00:16:28.605 Could not set queue depth (nvme5n1) 00:16:28.605 Could not set queue depth (nvme6n1) 00:16:28.605 Could not set queue depth (nvme7n1) 00:16:28.605 Could not set queue depth (nvme8n1) 00:16:28.605 Could not set queue depth (nvme9n1) 00:16:28.605 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:28.605 fio-3.35 00:16:28.605 Starting 11 threads 00:16:40.852 00:16:40.852 job0: (groupid=0, jobs=1): err= 0: pid=86159: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(352MiB/10119msec) 00:16:40.852 slat (usec): min=15, max=309486, avg=7066.33, stdev=20638.19 00:16:40.852 clat (msec): min=24, max=772, avg=451.72, stdev=137.64 00:16:40.852 lat (msec): min=24, max=784, avg=458.79, stdev=139.66 00:16:40.852 clat percentiles (msec): 00:16:40.852 | 1.00th=[ 44], 5.00th=[ 300], 10.00th=[ 334], 20.00th=[ 355], 00:16:40.852 | 30.00th=[ 368], 40.00th=[ 380], 50.00th=[ 401], 60.00th=[ 498], 00:16:40.852 | 70.00th=[ 550], 80.00th=[ 600], 90.00th=[ 634], 95.00th=[ 667], 00:16:40.852 | 99.00th=[ 693], 99.50th=[ 726], 99.90th=[ 735], 99.95th=[ 776], 00:16:40.852 | 99.99th=[ 776] 00:16:40.852 bw ( KiB/s): min=19968, max=46592, 
per=5.14%, avg=34435.20, stdev=9044.26, samples=20 00:16:40.852 iops : min= 78, max= 182, avg=134.50, stdev=35.33, samples=20 00:16:40.852 lat (msec) : 50=1.14%, 100=1.35%, 250=1.92%, 500=56.85%, 750=38.68% 00:16:40.852 lat (msec) : 1000=0.07% 00:16:40.852 cpu : usr=0.10%, sys=0.64%, ctx=282, majf=0, minf=4097 00:16:40.852 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:16:40.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.852 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.852 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.852 job1: (groupid=0, jobs=1): err= 0: pid=86160: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=155, BW=38.9MiB/s (40.8MB/s)(394MiB/10121msec) 00:16:40.852 slat (usec): min=19, max=354347, avg=6348.67, stdev=18332.46 00:16:40.852 clat (msec): min=32, max=688, avg=404.29, stdev=79.82 00:16:40.852 lat (msec): min=33, max=824, avg=410.64, stdev=80.92 00:16:40.852 clat percentiles (msec): 00:16:40.852 | 1.00th=[ 234], 5.00th=[ 309], 10.00th=[ 330], 20.00th=[ 355], 00:16:40.852 | 30.00th=[ 372], 40.00th=[ 384], 50.00th=[ 388], 60.00th=[ 397], 00:16:40.852 | 70.00th=[ 409], 80.00th=[ 439], 90.00th=[ 518], 95.00th=[ 567], 00:16:40.852 | 99.00th=[ 642], 99.50th=[ 693], 99.90th=[ 693], 99.95th=[ 693], 00:16:40.852 | 99.99th=[ 693] 00:16:40.852 bw ( KiB/s): min= 7182, max=47104, per=5.77%, avg=38685.50, stdev=9071.49, samples=20 00:16:40.852 iops : min= 28, max= 184, avg=151.10, stdev=35.45, samples=20 00:16:40.852 lat (msec) : 50=0.06%, 250=1.33%, 500=86.21%, 750=12.39% 00:16:40.852 cpu : usr=0.11%, sys=0.72%, ctx=338, majf=0, minf=4097 00:16:40.852 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:16:40.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.852 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.852 issued rwts: total=1574,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.852 job2: (groupid=0, jobs=1): err= 0: pid=86161: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=229, BW=57.3MiB/s (60.0MB/s)(582MiB/10165msec) 00:16:40.852 slat (usec): min=19, max=113321, avg=4295.19, stdev=11949.56 00:16:40.852 clat (msec): min=28, max=729, avg=274.55, stdev=173.17 00:16:40.852 lat (msec): min=28, max=730, avg=278.84, stdev=175.82 00:16:40.852 clat percentiles (msec): 00:16:40.852 | 1.00th=[ 36], 5.00th=[ 94], 10.00th=[ 112], 20.00th=[ 124], 00:16:40.852 | 30.00th=[ 131], 40.00th=[ 142], 50.00th=[ 266], 60.00th=[ 309], 00:16:40.852 | 70.00th=[ 334], 80.00th=[ 443], 90.00th=[ 575], 95.00th=[ 600], 00:16:40.852 | 99.00th=[ 634], 99.50th=[ 709], 99.90th=[ 735], 99.95th=[ 735], 00:16:40.852 | 99.99th=[ 735] 00:16:40.852 bw ( KiB/s): min=25088, max=133120, per=8.65%, avg=57980.95, stdev=39068.17, samples=20 00:16:40.852 iops : min= 98, max= 520, avg=226.45, stdev=152.64, samples=20 00:16:40.852 lat (msec) : 50=1.76%, 100=3.82%, 250=44.07%, 500=32.39%, 750=17.96% 00:16:40.852 cpu : usr=0.18%, sys=0.94%, ctx=485, majf=0, minf=4097 00:16:40.852 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:16:40.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.852 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.852 issued rwts: total=2328,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:16:40.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.852 job3: (groupid=0, jobs=1): err= 0: pid=86162: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=116, BW=29.2MiB/s (30.6MB/s)(297MiB/10170msec) 00:16:40.852 slat (usec): min=16, max=234654, avg=8060.35, stdev=22852.12 00:16:40.852 clat (msec): min=16, max=743, avg=539.38, stdev=137.40 00:16:40.852 lat (msec): min=16, max=755, avg=547.44, stdev=139.07 00:16:40.852 clat percentiles (msec): 00:16:40.852 | 1.00th=[ 47], 5.00th=[ 236], 10.00th=[ 330], 20.00th=[ 485], 00:16:40.852 | 30.00th=[ 523], 40.00th=[ 542], 50.00th=[ 567], 60.00th=[ 592], 00:16:40.852 | 70.00th=[ 617], 80.00th=[ 642], 90.00th=[ 659], 95.00th=[ 684], 00:16:40.852 | 99.00th=[ 726], 99.50th=[ 726], 99.90th=[ 735], 99.95th=[ 743], 00:16:40.852 | 99.99th=[ 743] 00:16:40.852 bw ( KiB/s): min=18944, max=43008, per=4.29%, avg=28746.55, stdev=5992.45, samples=20 00:16:40.852 iops : min= 74, max= 168, avg=112.25, stdev=23.45, samples=20 00:16:40.852 lat (msec) : 20=0.17%, 50=1.26%, 100=1.18%, 250=3.29%, 500=18.28% 00:16:40.852 lat (msec) : 750=75.82% 00:16:40.852 cpu : usr=0.06%, sys=0.54%, ctx=240, majf=0, minf=4097 00:16:40.852 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 32=2.7%, >=64=94.7% 00:16:40.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.852 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.852 issued rwts: total=1187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.852 job4: (groupid=0, jobs=1): err= 0: pid=86163: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=151, BW=37.9MiB/s (39.7MB/s)(384MiB/10126msec) 00:16:40.852 slat (usec): min=20, max=279284, avg=6527.68, stdev=19944.64 00:16:40.852 clat (msec): min=16, max=852, avg=415.31, stdev=100.26 00:16:40.852 lat (msec): min=17, max=854, avg=421.84, stdev=101.28 00:16:40.852 clat percentiles (msec): 00:16:40.852 | 1.00th=[ 207], 5.00th=[ 300], 10.00th=[ 317], 20.00th=[ 347], 00:16:40.852 | 30.00th=[ 372], 40.00th=[ 388], 50.00th=[ 401], 60.00th=[ 418], 00:16:40.852 | 70.00th=[ 435], 80.00th=[ 451], 90.00th=[ 575], 95.00th=[ 667], 00:16:40.852 | 99.00th=[ 693], 99.50th=[ 693], 99.90th=[ 726], 99.95th=[ 852], 00:16:40.852 | 99.99th=[ 852] 00:16:40.852 bw ( KiB/s): min=15360, max=45056, per=5.62%, avg=37652.65, stdev=7635.23, samples=20 00:16:40.852 iops : min= 60, max= 176, avg=147.00, stdev=29.78, samples=20 00:16:40.852 lat (msec) : 20=0.07%, 50=0.26%, 250=1.37%, 500=85.46%, 750=12.78% 00:16:40.852 lat (msec) : 1000=0.07% 00:16:40.852 cpu : usr=0.06%, sys=0.71%, ctx=293, majf=0, minf=4097 00:16:40.852 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:16:40.852 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.852 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.852 issued rwts: total=1534,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.852 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.852 job5: (groupid=0, jobs=1): err= 0: pid=86164: Tue Oct 1 06:09:04 2024 00:16:40.852 read: IOPS=136, BW=34.1MiB/s (35.7MB/s)(345MiB/10121msec) 00:16:40.852 slat (usec): min=15, max=228371, avg=7257.18, stdev=20748.17 00:16:40.852 clat (msec): min=98, max=837, avg=461.43, stdev=152.60 00:16:40.853 lat (msec): min=107, max=837, avg=468.68, stdev=154.51 00:16:40.853 clat percentiles (msec): 00:16:40.853 | 1.00th=[ 138], 5.00th=[ 
234], 10.00th=[ 313], 20.00th=[ 351], 00:16:40.853 | 30.00th=[ 368], 40.00th=[ 376], 50.00th=[ 397], 60.00th=[ 489], 00:16:40.853 | 70.00th=[ 558], 80.00th=[ 609], 90.00th=[ 693], 95.00th=[ 735], 00:16:40.853 | 99.00th=[ 768], 99.50th=[ 810], 99.90th=[ 835], 99.95th=[ 835], 00:16:40.853 | 99.99th=[ 835] 00:16:40.853 bw ( KiB/s): min=13824, max=47104, per=5.03%, avg=33710.70, stdev=9855.45, samples=20 00:16:40.853 iops : min= 54, max= 184, avg=131.60, stdev=38.46, samples=20 00:16:40.853 lat (msec) : 100=0.07%, 250=4.93%, 500=56.38%, 750=35.29%, 1000=3.33% 00:16:40.853 cpu : usr=0.03%, sys=0.63%, ctx=267, majf=0, minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=1380,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 job6: (groupid=0, jobs=1): err= 0: pid=86165: Tue Oct 1 06:09:04 2024 00:16:40.853 read: IOPS=137, BW=34.3MiB/s (36.0MB/s)(347MiB/10126msec) 00:16:40.853 slat (usec): min=20, max=232100, avg=7192.63, stdev=20218.66 00:16:40.853 clat (msec): min=32, max=761, avg=458.12, stdev=131.60 00:16:40.853 lat (msec): min=32, max=775, avg=465.32, stdev=133.38 00:16:40.853 clat percentiles (msec): 00:16:40.853 | 1.00th=[ 41], 5.00th=[ 317], 10.00th=[ 338], 20.00th=[ 359], 00:16:40.853 | 30.00th=[ 368], 40.00th=[ 380], 50.00th=[ 397], 60.00th=[ 498], 00:16:40.853 | 70.00th=[ 542], 80.00th=[ 592], 90.00th=[ 642], 95.00th=[ 684], 00:16:40.853 | 99.00th=[ 718], 99.50th=[ 718], 99.90th=[ 726], 99.95th=[ 760], 00:16:40.853 | 99.99th=[ 760] 00:16:40.853 bw ( KiB/s): min=20480, max=48640, per=5.06%, avg=33943.95, stdev=9057.69, samples=20 00:16:40.853 iops : min= 80, max= 190, avg=132.55, stdev=35.35, samples=20 00:16:40.853 lat (msec) : 50=1.01%, 250=1.22%, 500=57.81%, 750=39.88%, 1000=0.07% 00:16:40.853 cpu : usr=0.10%, sys=0.67%, ctx=290, majf=0, minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.5% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=1389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 job7: (groupid=0, jobs=1): err= 0: pid=86166: Tue Oct 1 06:09:04 2024 00:16:40.853 read: IOPS=1034, BW=259MiB/s (271MB/s)(2592MiB/10022msec) 00:16:40.853 slat (usec): min=16, max=24714, avg=958.62, stdev=1879.16 00:16:40.853 clat (msec): min=21, max=102, avg=60.82, stdev= 4.52 00:16:40.853 lat (msec): min=23, max=103, avg=61.78, stdev= 4.56 00:16:40.853 clat percentiles (usec): 00:16:40.853 | 1.00th=[51643], 5.00th=[54789], 10.00th=[56361], 20.00th=[57934], 00:16:40.853 | 30.00th=[58983], 40.00th=[60031], 50.00th=[61080], 60.00th=[61604], 00:16:40.853 | 70.00th=[62653], 80.00th=[63701], 90.00th=[65274], 95.00th=[66847], 00:16:40.853 | 99.00th=[70779], 99.50th=[79168], 99.90th=[98042], 99.95th=[99091], 00:16:40.853 | 99.99th=[99091] 00:16:40.853 bw ( KiB/s): min=243174, max=276480, per=39.36%, avg=263832.30, stdev=7365.74, samples=20 00:16:40.853 iops : min= 949, max= 1080, avg=1030.55, stdev=28.91, samples=20 00:16:40.853 lat (msec) : 50=0.53%, 100=99.46%, 250=0.01% 00:16:40.853 cpu : usr=0.62%, sys=4.16%, ctx=2363, majf=0, 
minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=10368,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 job8: (groupid=0, jobs=1): err= 0: pid=86167: Tue Oct 1 06:09:04 2024 00:16:40.853 read: IOPS=153, BW=38.3MiB/s (40.1MB/s)(389MiB/10156msec) 00:16:40.853 slat (usec): min=19, max=260175, avg=6141.23, stdev=18335.83 00:16:40.853 clat (msec): min=39, max=764, avg=411.41, stdev=149.67 00:16:40.853 lat (msec): min=39, max=764, avg=417.56, stdev=152.18 00:16:40.853 clat percentiles (msec): 00:16:40.853 | 1.00th=[ 65], 5.00th=[ 130], 10.00th=[ 232], 20.00th=[ 300], 00:16:40.853 | 30.00th=[ 321], 40.00th=[ 334], 50.00th=[ 368], 60.00th=[ 481], 00:16:40.853 | 70.00th=[ 518], 80.00th=[ 567], 90.00th=[ 600], 95.00th=[ 634], 00:16:40.853 | 99.00th=[ 667], 99.50th=[ 684], 99.90th=[ 743], 99.95th=[ 768], 00:16:40.853 | 99.99th=[ 768] 00:16:40.853 bw ( KiB/s): min=25600, max=69120, per=5.69%, avg=38169.60, stdev=12074.47, samples=20 00:16:40.853 iops : min= 100, max= 270, avg=149.10, stdev=47.17, samples=20 00:16:40.853 lat (msec) : 50=0.26%, 100=3.22%, 250=7.65%, 500=53.50%, 750=35.31% 00:16:40.853 lat (msec) : 1000=0.06% 00:16:40.853 cpu : usr=0.09%, sys=0.71%, ctx=359, majf=0, minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=1555,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 job9: (groupid=0, jobs=1): err= 0: pid=86168: Tue Oct 1 06:09:04 2024 00:16:40.853 read: IOPS=228, BW=57.1MiB/s (59.9MB/s)(581MiB/10167msec) 00:16:40.853 slat (usec): min=20, max=144091, avg=4299.06, stdev=12726.96 00:16:40.853 clat (msec): min=15, max=706, avg=275.35, stdev=174.46 00:16:40.853 lat (msec): min=16, max=707, avg=279.65, stdev=177.06 00:16:40.853 clat percentiles (msec): 00:16:40.853 | 1.00th=[ 64], 5.00th=[ 95], 10.00th=[ 111], 20.00th=[ 123], 00:16:40.853 | 30.00th=[ 130], 40.00th=[ 142], 50.00th=[ 257], 60.00th=[ 309], 00:16:40.853 | 70.00th=[ 334], 80.00th=[ 430], 90.00th=[ 575], 95.00th=[ 609], 00:16:40.853 | 99.00th=[ 667], 99.50th=[ 676], 99.90th=[ 709], 99.95th=[ 709], 00:16:40.853 | 99.99th=[ 709] 00:16:40.853 bw ( KiB/s): min=24064, max=132096, per=8.63%, avg=57878.50, stdev=38722.52, samples=20 00:16:40.853 iops : min= 94, max= 516, avg=226.05, stdev=151.29, samples=20 00:16:40.853 lat (msec) : 20=0.04%, 50=0.09%, 100=6.97%, 250=42.69%, 500=32.36% 00:16:40.853 lat (msec) : 750=17.86% 00:16:40.853 cpu : usr=0.13%, sys=1.05%, ctx=447, majf=0, minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 job10: (groupid=0, jobs=1): err= 0: pid=86169: Tue Oct 1 06:09:04 2024 00:16:40.853 read: IOPS=156, 
BW=39.1MiB/s (41.0MB/s)(396MiB/10132msec) 00:16:40.853 slat (usec): min=17, max=298811, avg=6329.31, stdev=18629.21 00:16:40.853 clat (msec): min=15, max=724, avg=402.66, stdev=108.50 00:16:40.853 lat (msec): min=16, max=844, avg=408.99, stdev=109.80 00:16:40.853 clat percentiles (msec): 00:16:40.853 | 1.00th=[ 124], 5.00th=[ 279], 10.00th=[ 321], 20.00th=[ 347], 00:16:40.853 | 30.00th=[ 363], 40.00th=[ 380], 50.00th=[ 393], 60.00th=[ 401], 00:16:40.853 | 70.00th=[ 414], 80.00th=[ 430], 90.00th=[ 575], 95.00th=[ 659], 00:16:40.853 | 99.00th=[ 709], 99.50th=[ 726], 99.90th=[ 726], 99.95th=[ 726], 00:16:40.853 | 99.99th=[ 726] 00:16:40.853 bw ( KiB/s): min=14336, max=47616, per=5.80%, avg=38881.95, stdev=7715.97, samples=20 00:16:40.853 iops : min= 56, max= 186, avg=151.85, stdev=30.12, samples=20 00:16:40.853 lat (msec) : 20=0.13%, 50=0.32%, 250=4.36%, 500=83.26%, 750=11.94% 00:16:40.853 cpu : usr=0.08%, sys=0.73%, ctx=312, majf=0, minf=4097 00:16:40.853 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.0%, >=64=96.0% 00:16:40.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:40.853 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:40.853 issued rwts: total=1583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:40.853 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:40.853 00:16:40.853 Run status group 0 (all jobs): 00:16:40.853 READ: bw=655MiB/s (686MB/s), 29.2MiB/s-259MiB/s (30.6MB/s-271MB/s), io=6658MiB (6981MB), run=10022-10170msec 00:16:40.853 00:16:40.853 Disk stats (read/write): 00:16:40.853 nvme0n1: ios=2690/0, merge=0/0, ticks=1224185/0, in_queue=1224185, util=97.79% 00:16:40.853 nvme10n1: ios=3021/0, merge=0/0, ticks=1219412/0, in_queue=1219412, util=97.84% 00:16:40.853 nvme1n1: ios=4532/0, merge=0/0, ticks=1205224/0, in_queue=1205224, util=98.14% 00:16:40.853 nvme2n1: ios=2246/0, merge=0/0, ticks=1211124/0, in_queue=1211124, util=98.31% 00:16:40.853 nvme3n1: ios=2941/0, merge=0/0, ticks=1217876/0, in_queue=1217876, util=98.33% 00:16:40.853 nvme4n1: ios=2632/0, merge=0/0, ticks=1220227/0, in_queue=1220227, util=98.49% 00:16:40.853 nvme5n1: ios=2658/0, merge=0/0, ticks=1221323/0, in_queue=1221323, util=98.61% 00:16:40.853 nvme6n1: ios=20608/0, merge=0/0, ticks=1238492/0, in_queue=1238492, util=98.58% 00:16:40.853 nvme7n1: ios=2982/0, merge=0/0, ticks=1211355/0, in_queue=1211355, util=98.91% 00:16:40.853 nvme8n1: ios=4524/0, merge=0/0, ticks=1211267/0, in_queue=1211267, util=99.21% 00:16:40.853 nvme9n1: ios=3045/0, merge=0/0, ticks=1223586/0, in_queue=1223586, util=99.24% 00:16:40.853 06:09:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:16:40.853 [global] 00:16:40.853 thread=1 00:16:40.853 invalidate=1 00:16:40.853 rw=randwrite 00:16:40.853 time_based=1 00:16:40.853 runtime=10 00:16:40.853 ioengine=libaio 00:16:40.853 direct=1 00:16:40.853 bs=262144 00:16:40.853 iodepth=64 00:16:40.853 norandommap=1 00:16:40.853 numjobs=1 00:16:40.853 00:16:40.853 [job0] 00:16:40.853 filename=/dev/nvme0n1 00:16:40.853 [job1] 00:16:40.854 filename=/dev/nvme10n1 00:16:40.854 [job2] 00:16:40.854 filename=/dev/nvme1n1 00:16:40.854 [job3] 00:16:40.854 filename=/dev/nvme2n1 00:16:40.854 [job4] 00:16:40.854 filename=/dev/nvme3n1 00:16:40.854 [job5] 00:16:40.854 filename=/dev/nvme4n1 00:16:40.854 [job6] 00:16:40.854 filename=/dev/nvme5n1 00:16:40.854 [job7] 00:16:40.854 filename=/dev/nvme6n1 
00:16:40.854 [job8] 00:16:40.854 filename=/dev/nvme7n1 00:16:40.854 [job9] 00:16:40.854 filename=/dev/nvme8n1 00:16:40.854 [job10] 00:16:40.854 filename=/dev/nvme9n1 00:16:40.854 Could not set queue depth (nvme0n1) 00:16:40.854 Could not set queue depth (nvme10n1) 00:16:40.854 Could not set queue depth (nvme1n1) 00:16:40.854 Could not set queue depth (nvme2n1) 00:16:40.854 Could not set queue depth (nvme3n1) 00:16:40.854 Could not set queue depth (nvme4n1) 00:16:40.854 Could not set queue depth (nvme5n1) 00:16:40.854 Could not set queue depth (nvme6n1) 00:16:40.854 Could not set queue depth (nvme7n1) 00:16:40.854 Could not set queue depth (nvme8n1) 00:16:40.854 Could not set queue depth (nvme9n1) 00:16:40.854 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:16:40.854 fio-3.35 00:16:40.854 Starting 11 threads 00:16:50.836 00:16:50.836 job0: (groupid=0, jobs=1): err= 0: pid=86364: Tue Oct 1 06:09:15 2024 00:16:50.836 write: IOPS=206, BW=51.7MiB/s (54.2MB/s)(527MiB/10187msec); 0 zone resets 00:16:50.836 slat (usec): min=17, max=95742, avg=4625.44, stdev=8533.78 00:16:50.836 clat (msec): min=12, max=503, avg=304.66, stdev=45.07 00:16:50.836 lat (msec): min=12, max=503, avg=309.28, stdev=45.33 00:16:50.836 clat percentiles (msec): 00:16:50.836 | 1.00th=[ 95], 5.00th=[ 213], 10.00th=[ 279], 20.00th=[ 300], 00:16:50.836 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 317], 60.00th=[ 321], 00:16:50.836 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 326], 95.00th=[ 330], 00:16:50.836 | 99.00th=[ 401], 99.50th=[ 443], 99.90th=[ 485], 99.95th=[ 506], 00:16:50.836 | 99.99th=[ 506] 00:16:50.836 bw ( KiB/s): min=49152, max=68608, per=4.83%, avg=52326.80, stdev=4230.42, samples=20 00:16:50.836 iops : min= 192, max= 268, avg=204.35, stdev=16.52, samples=20 00:16:50.836 lat (msec) : 20=0.19%, 50=0.38%, 100=0.57%, 250=5.65%, 500=93.12% 00:16:50.836 lat (msec) : 750=0.09% 00:16:50.836 cpu : usr=0.39%, sys=0.50%, ctx=1937, majf=0, minf=1 00:16:50.836 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:16:50.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 
00:16:50.836 issued rwts: total=0,2107,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.836 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.836 job1: (groupid=0, jobs=1): err= 0: pid=86365: Tue Oct 1 06:09:15 2024 00:16:50.836 write: IOPS=363, BW=90.9MiB/s (95.3MB/s)(920MiB/10129msec); 0 zone resets 00:16:50.836 slat (usec): min=17, max=60691, avg=2711.86, stdev=4820.96 00:16:50.836 clat (msec): min=55, max=309, avg=173.33, stdev=21.97 00:16:50.836 lat (msec): min=55, max=309, avg=176.04, stdev=21.78 00:16:50.836 clat percentiles (msec): 00:16:50.836 | 1.00th=[ 148], 5.00th=[ 161], 10.00th=[ 161], 20.00th=[ 163], 00:16:50.836 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 171], 60.00th=[ 174], 00:16:50.836 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 201], 00:16:50.836 | 99.00th=[ 288], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:16:50.836 | 99.99th=[ 309] 00:16:50.836 bw ( KiB/s): min=53248, max=96256, per=8.55%, avg=92611.35, stdev=9553.87, samples=20 00:16:50.836 iops : min= 208, max= 376, avg=361.75, stdev=37.32, samples=20 00:16:50.836 lat (msec) : 100=0.33%, 250=96.88%, 500=2.80% 00:16:50.836 cpu : usr=0.59%, sys=1.21%, ctx=5103, majf=0, minf=1 00:16:50.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:50.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.836 issued rwts: total=0,3681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job2: (groupid=0, jobs=1): err= 0: pid=86377: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=213, BW=53.3MiB/s (55.9MB/s)(543MiB/10185msec); 0 zone resets 00:16:50.837 slat (usec): min=20, max=39676, avg=4599.15, stdev=8181.11 00:16:50.837 clat (msec): min=10, max=503, avg=295.36, stdev=56.12 00:16:50.837 lat (msec): min=10, max=503, avg=299.96, stdev=56.49 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 1.00th=[ 40], 5.00th=[ 150], 10.00th=[ 249], 20.00th=[ 292], 00:16:50.837 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 313], 60.00th=[ 317], 00:16:50.837 | 70.00th=[ 317], 80.00th=[ 321], 90.00th=[ 321], 95.00th=[ 326], 00:16:50.837 | 99.00th=[ 401], 99.50th=[ 443], 99.90th=[ 485], 99.95th=[ 502], 00:16:50.837 | 99.99th=[ 502] 00:16:50.837 bw ( KiB/s): min=49250, max=90112, per=4.98%, avg=53984.85, stdev=8678.33, samples=20 00:16:50.837 iops : min= 192, max= 352, avg=210.80, stdev=33.93, samples=20 00:16:50.837 lat (msec) : 20=0.37%, 50=0.92%, 100=0.92%, 250=7.97%, 500=89.73% 00:16:50.837 lat (msec) : 750=0.09% 00:16:50.837 cpu : usr=0.42%, sys=0.68%, ctx=1642, majf=0, minf=1 00:16:50.837 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:50.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.837 issued rwts: total=0,2172,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job3: (groupid=0, jobs=1): err= 0: pid=86378: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=416, BW=104MiB/s (109MB/s)(1054MiB/10131msec); 0 zone resets 00:16:50.837 slat (usec): min=18, max=158052, avg=2367.34, stdev=4668.41 00:16:50.837 clat (msec): min=123, max=342, avg=151.35, stdev=17.52 00:16:50.837 lat (msec): min=132, max=342, avg=153.72, stdev=17.12 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 
1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:16:50.837 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:16:50.837 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:16:50.837 | 99.00th=[ 255], 99.50th=[ 288], 99.90th=[ 334], 99.95th=[ 342], 00:16:50.837 | 99.99th=[ 342] 00:16:50.837 bw ( KiB/s): min=67584, max=110592, per=9.81%, avg=106295.10, stdev=9218.62, samples=20 00:16:50.837 iops : min= 264, max= 432, avg=415.20, stdev=36.01, samples=20 00:16:50.837 lat (msec) : 250=98.93%, 500=1.07% 00:16:50.837 cpu : usr=0.75%, sys=1.28%, ctx=5248, majf=0, minf=1 00:16:50.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:50.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.837 issued rwts: total=0,4216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job4: (groupid=0, jobs=1): err= 0: pid=86379: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=1077, BW=269MiB/s (283MB/s)(2709MiB/10053msec); 0 zone resets 00:16:50.837 slat (usec): min=13, max=7813, avg=918.70, stdev=1542.03 00:16:50.837 clat (msec): min=10, max=109, avg=58.45, stdev= 3.85 00:16:50.837 lat (msec): min=10, max=109, avg=59.37, stdev= 3.64 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 1.00th=[ 54], 5.00th=[ 55], 10.00th=[ 56], 20.00th=[ 57], 00:16:50.837 | 30.00th=[ 58], 40.00th=[ 59], 50.00th=[ 59], 60.00th=[ 59], 00:16:50.837 | 70.00th=[ 60], 80.00th=[ 61], 90.00th=[ 61], 95.00th=[ 62], 00:16:50.837 | 99.00th=[ 70], 99.50th=[ 74], 99.90th=[ 100], 99.95th=[ 107], 00:16:50.837 | 99.99th=[ 110] 00:16:50.837 bw ( KiB/s): min=257536, max=281088, per=25.46%, avg=275709.65, stdev=5448.58, samples=20 00:16:50.837 iops : min= 1006, max= 1098, avg=1076.90, stdev=21.25, samples=20 00:16:50.837 lat (msec) : 20=0.11%, 50=0.41%, 100=99.39%, 250=0.09% 00:16:50.837 cpu : usr=1.61%, sys=2.24%, ctx=11706, majf=0, minf=1 00:16:50.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:50.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.837 issued rwts: total=0,10834,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job5: (groupid=0, jobs=1): err= 0: pid=86380: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=419, BW=105MiB/s (110MB/s)(1064MiB/10133msec); 0 zone resets 00:16:50.837 slat (usec): min=17, max=23580, avg=2346.52, stdev=4049.36 00:16:50.837 clat (msec): min=25, max=277, avg=150.04, stdev=14.23 00:16:50.837 lat (msec): min=25, max=277, avg=152.38, stdev=13.84 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 1.00th=[ 132], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:16:50.837 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:16:50.837 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:16:50.837 | 99.00th=[ 220], 99.50th=[ 230], 99.90th=[ 268], 99.95th=[ 271], 00:16:50.837 | 99.99th=[ 279] 00:16:50.837 bw ( KiB/s): min=86016, max=112640, per=9.91%, avg=107278.75, stdev=5246.50, samples=20 00:16:50.837 iops : min= 336, max= 440, avg=419.05, stdev=20.49, samples=20 00:16:50.837 lat (msec) : 50=0.26%, 100=0.28%, 250=99.22%, 500=0.24% 00:16:50.837 cpu : usr=0.67%, sys=1.33%, ctx=5024, majf=0, minf=1 
00:16:50.837 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:50.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.837 issued rwts: total=0,4254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job6: (groupid=0, jobs=1): err= 0: pid=86381: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=209, BW=52.5MiB/s (55.0MB/s)(534MiB/10177msec); 0 zone resets 00:16:50.837 slat (usec): min=16, max=55939, avg=4683.53, stdev=8356.52 00:16:50.837 clat (msec): min=42, max=480, avg=300.13, stdev=49.09 00:16:50.837 lat (msec): min=42, max=480, avg=304.81, stdev=49.26 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 1.00th=[ 88], 5.00th=[ 180], 10.00th=[ 257], 20.00th=[ 296], 00:16:50.837 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 317], 00:16:50.837 | 70.00th=[ 321], 80.00th=[ 321], 90.00th=[ 326], 95.00th=[ 330], 00:16:50.837 | 99.00th=[ 380], 99.50th=[ 422], 99.90th=[ 460], 99.95th=[ 481], 00:16:50.837 | 99.99th=[ 481] 00:16:50.837 bw ( KiB/s): min=49152, max=82084, per=4.90%, avg=53046.25, stdev=6907.78, samples=20 00:16:50.837 iops : min= 192, max= 320, avg=207.15, stdev=26.85, samples=20 00:16:50.837 lat (msec) : 50=0.19%, 100=1.12%, 250=8.47%, 500=90.22% 00:16:50.837 cpu : usr=0.45%, sys=0.65%, ctx=1748, majf=0, minf=1 00:16:50.837 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:50.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.837 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.837 issued rwts: total=0,2136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.837 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.837 job7: (groupid=0, jobs=1): err= 0: pid=86382: Tue Oct 1 06:09:15 2024 00:16:50.837 write: IOPS=362, BW=90.7MiB/s (95.1MB/s)(919MiB/10130msec); 0 zone resets 00:16:50.837 slat (usec): min=18, max=53322, avg=2714.96, stdev=4806.13 00:16:50.837 clat (msec): min=4, max=307, avg=173.62, stdev=24.81 00:16:50.837 lat (msec): min=5, max=307, avg=176.34, stdev=24.69 00:16:50.837 clat percentiles (msec): 00:16:50.837 | 1.00th=[ 146], 5.00th=[ 161], 10.00th=[ 161], 20.00th=[ 163], 00:16:50.837 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 171], 60.00th=[ 174], 00:16:50.837 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 207], 00:16:50.837 | 99.00th=[ 292], 99.50th=[ 300], 99.90th=[ 309], 99.95th=[ 309], 00:16:50.838 | 99.99th=[ 309] 00:16:50.838 bw ( KiB/s): min=56320, max=96768, per=8.54%, avg=92457.75, stdev=9235.75, samples=20 00:16:50.838 iops : min= 220, max= 378, avg=361.15, stdev=36.07, samples=20 00:16:50.838 lat (msec) : 10=0.03%, 50=0.22%, 100=0.33%, 250=95.76%, 500=3.67% 00:16:50.838 cpu : usr=0.77%, sys=1.03%, ctx=3837, majf=0, minf=1 00:16:50.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.838 issued rwts: total=0,3675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.838 job8: (groupid=0, jobs=1): err= 0: pid=86383: Tue Oct 1 06:09:15 2024 00:16:50.838 write: IOPS=360, BW=90.2MiB/s (94.6MB/s)(914MiB/10127msec); 0 zone resets 00:16:50.838 slat (usec): min=18, 
max=152839, avg=2730.72, stdev=5312.45 00:16:50.838 clat (msec): min=121, max=367, avg=174.47, stdev=23.81 00:16:50.838 lat (msec): min=133, max=367, avg=177.20, stdev=23.59 00:16:50.838 clat percentiles (msec): 00:16:50.838 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 161], 20.00th=[ 163], 00:16:50.838 | 30.00th=[ 169], 40.00th=[ 171], 50.00th=[ 171], 60.00th=[ 174], 00:16:50.838 | 70.00th=[ 174], 80.00th=[ 174], 90.00th=[ 176], 95.00th=[ 205], 00:16:50.838 | 99.00th=[ 296], 99.50th=[ 309], 99.90th=[ 355], 99.95th=[ 368], 00:16:50.838 | 99.99th=[ 368] 00:16:50.838 bw ( KiB/s): min=40960, max=96768, per=8.49%, avg=91945.55, stdev=12221.91, samples=20 00:16:50.838 iops : min= 160, max= 378, avg=359.15, stdev=47.74, samples=20 00:16:50.838 lat (msec) : 250=96.85%, 500=3.15% 00:16:50.838 cpu : usr=0.69%, sys=1.12%, ctx=4349, majf=0, minf=1 00:16:50.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:16:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.838 issued rwts: total=0,3655,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.838 job9: (groupid=0, jobs=1): err= 0: pid=86384: Tue Oct 1 06:09:15 2024 00:16:50.838 write: IOPS=209, BW=52.4MiB/s (55.0MB/s)(534MiB/10188msec); 0 zone resets 00:16:50.838 slat (usec): min=18, max=41235, avg=4594.68, stdev=8382.83 00:16:50.838 clat (msec): min=24, max=499, avg=300.54, stdev=56.02 00:16:50.838 lat (msec): min=24, max=499, avg=305.13, stdev=56.57 00:16:50.838 clat percentiles (msec): 00:16:50.838 | 1.00th=[ 73], 5.00th=[ 169], 10.00th=[ 234], 20.00th=[ 300], 00:16:50.838 | 30.00th=[ 305], 40.00th=[ 317], 50.00th=[ 317], 60.00th=[ 321], 00:16:50.838 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 338], 00:16:50.838 | 99.00th=[ 397], 99.50th=[ 439], 99.90th=[ 481], 99.95th=[ 502], 00:16:50.838 | 99.99th=[ 502] 00:16:50.838 bw ( KiB/s): min=49152, max=90292, per=4.90%, avg=53067.65, stdev=8957.64, samples=20 00:16:50.838 iops : min= 192, max= 352, avg=207.20, stdev=34.86, samples=20 00:16:50.838 lat (msec) : 50=0.56%, 100=1.97%, 250=8.24%, 500=89.23% 00:16:50.838 cpu : usr=0.31%, sys=0.53%, ctx=2083, majf=0, minf=1 00:16:50.838 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:16:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.838 issued rwts: total=0,2136,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.838 job10: (groupid=0, jobs=1): err= 0: pid=86385: Tue Oct 1 06:09:15 2024 00:16:50.838 write: IOPS=418, BW=105MiB/s (110MB/s)(1059MiB/10130msec); 0 zone resets 00:16:50.838 slat (usec): min=17, max=86708, avg=2357.65, stdev=4231.34 00:16:50.838 clat (msec): min=88, max=287, avg=150.65, stdev=13.73 00:16:50.838 lat (msec): min=88, max=287, avg=153.00, stdev=13.22 00:16:50.838 clat percentiles (msec): 00:16:50.838 | 1.00th=[ 138], 5.00th=[ 140], 10.00th=[ 142], 20.00th=[ 144], 00:16:50.838 | 30.00th=[ 148], 40.00th=[ 150], 50.00th=[ 150], 60.00th=[ 153], 00:16:50.838 | 70.00th=[ 153], 80.00th=[ 153], 90.00th=[ 155], 95.00th=[ 155], 00:16:50.838 | 99.00th=[ 222], 99.50th=[ 251], 99.90th=[ 279], 99.95th=[ 288], 00:16:50.838 | 99.99th=[ 288] 00:16:50.838 bw ( KiB/s): min=77824, max=110592, per=9.86%, avg=106807.10, 
stdev=6964.15, samples=20 00:16:50.838 iops : min= 304, max= 432, avg=417.20, stdev=27.20, samples=20 00:16:50.838 lat (msec) : 100=0.09%, 250=99.32%, 500=0.59% 00:16:50.838 cpu : usr=0.63%, sys=0.91%, ctx=5775, majf=0, minf=1 00:16:50.838 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:50.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:50.838 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:16:50.838 issued rwts: total=0,4236,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:50.838 latency : target=0, window=0, percentile=100.00%, depth=64 00:16:50.838 00:16:50.838 Run status group 0 (all jobs): 00:16:50.838 WRITE: bw=1058MiB/s (1109MB/s), 51.7MiB/s-269MiB/s (54.2MB/s-283MB/s), io=10.5GiB (11.3GB), run=10053-10188msec 00:16:50.838 00:16:50.838 Disk stats (read/write): 00:16:50.838 nvme0n1: ios=50/4089, merge=0/0, ticks=43/1204743, in_queue=1204786, util=97.91% 00:16:50.838 nvme10n1: ios=49/7216, merge=0/0, ticks=50/1210115, in_queue=1210165, util=97.90% 00:16:50.838 nvme1n1: ios=37/4217, merge=0/0, ticks=69/1203574, in_queue=1203643, util=98.13% 00:16:50.838 nvme2n1: ios=24/8297, merge=0/0, ticks=21/1213192, in_queue=1213213, util=98.02% 00:16:50.838 nvme3n1: ios=0/21517, merge=0/0, ticks=0/1216739, in_queue=1216739, util=98.00% 00:16:50.838 nvme4n1: ios=0/8374, merge=0/0, ticks=0/1212802, in_queue=1212802, util=98.25% 00:16:50.838 nvme5n1: ios=0/4135, merge=0/0, ticks=0/1201887, in_queue=1201887, util=98.20% 00:16:50.838 nvme6n1: ios=0/7215, merge=0/0, ticks=0/1210376, in_queue=1210376, util=98.34% 00:16:50.838 nvme7n1: ios=0/7170, merge=0/0, ticks=0/1210231, in_queue=1210231, util=98.53% 00:16:50.838 nvme8n1: ios=0/4143, merge=0/0, ticks=0/1204463, in_queue=1204463, util=98.81% 00:16:50.838 nvme9n1: ios=0/8340, merge=0/0, ticks=0/1213168, in_queue=1213168, util=98.92% 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.838 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.838 06:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.838 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:16:50.839 06:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:16:50.839 06:09:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:16:50.839 06:09:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:16:50.839 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:16:50.839 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:16:50.840 06:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:16:50.840 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:16:50.840 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:16:50.840 06:09:16 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:16:50.840 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:16:50.840 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:16:50.840 
06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # nvmfcleanup 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:50.840 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:50.840 rmmod nvme_tcp 00:16:51.100 rmmod nvme_fabrics 00:16:51.100 rmmod nvme_keyring 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@513 -- # '[' -n 85704 ']' 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@514 -- # killprocess 85704 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@950 -- # '[' -z 85704 ']' 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # kill -0 85704 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # uname 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85704 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:51.100 killing process with pid 85704 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85704' 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@969 -- # kill 85704 00:16:51.100 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@974 -- # wait 85704 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:16:51.359 06:09:16 
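The teardown that finishes here follows one pattern per subsystem: disconnect the NVMe/TCP controller on the initiator side, wait until no block device with that serial is visible any more, then remove the subsystem over RPC. A minimal sketch of that loop, assuming the cnodeN/SPDKN naming used in this run and scripts/rpc.py as the client behind the rpc_cmd wrapper:

  # one pass per subsystem, mirroring multiconnection.sh @37-@40 in the trace above
  for i in $(seq 1 11); do
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # block until lsblk no longer reports a device with serial SPDK$i
      while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK$i"; do sleep 1; done
      scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done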
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-save 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@787 -- # iptables-restore 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:51.359 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:51.618 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:51.618 06:09:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:16:51.618 00:16:51.618 real 0m48.509s 00:16:51.618 user 2m44.162s 00:16:51.618 sys 0m26.681s 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:51.618 ************************************ 00:16:51.618 END TEST 
nvmf_multiconnection 00:16:51.618 ************************************ 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:51.618 06:09:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:51.619 ************************************ 00:16:51.619 START TEST nvmf_initiator_timeout 00:16:51.619 ************************************ 00:16:51.619 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:16:51.619 * Looking for test storage... 00:16:51.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:51.619 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:51.619 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:16:51.619 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:51.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.879 --rc genhtml_branch_coverage=1 00:16:51.879 --rc genhtml_function_coverage=1 00:16:51.879 --rc genhtml_legend=1 00:16:51.879 --rc geninfo_all_blocks=1 00:16:51.879 --rc geninfo_unexecuted_blocks=1 00:16:51.879 00:16:51.879 ' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:51.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.879 --rc genhtml_branch_coverage=1 00:16:51.879 --rc genhtml_function_coverage=1 00:16:51.879 --rc genhtml_legend=1 00:16:51.879 --rc geninfo_all_blocks=1 00:16:51.879 --rc geninfo_unexecuted_blocks=1 00:16:51.879 00:16:51.879 ' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:51.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.879 --rc genhtml_branch_coverage=1 00:16:51.879 --rc genhtml_function_coverage=1 00:16:51.879 --rc genhtml_legend=1 00:16:51.879 --rc geninfo_all_blocks=1 00:16:51.879 --rc geninfo_unexecuted_blocks=1 00:16:51.879 00:16:51.879 ' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:51.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:51.879 --rc genhtml_branch_coverage=1 00:16:51.879 --rc genhtml_function_coverage=1 00:16:51.879 --rc genhtml_legend=1 00:16:51.879 --rc geninfo_all_blocks=1 00:16:51.879 --rc geninfo_unexecuted_blocks=1 00:16:51.879 00:16:51.879 ' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:16:51.879 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.880 06:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:51.880 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
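The addresses assigned here map out the virtual topology that nvmf_veth_init builds in the trace that follows: the initiator interfaces (10.0.0.1 and 10.0.0.2) stay in the root namespace, the target interfaces (10.0.0.3 and 10.0.0.4) are moved into the nvmf_tgt_ns_spdk namespace, and both halves are joined by the nvmf_br bridge with NVMe/TCP port 4420 opened in iptables. A condensed sketch of that setup, using the interface names from the trace and showing only the first initiator/target pair:

  # veth pair for the initiator (root namespace) and for the target (moved into the netns)
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  # bridge the two peer ends together and allow NVMe/TCP traffic in
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(The real helper also brings every link up and repeats the same steps for the second interface pair, as the subsequent lines show.)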
00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:51.880 Cannot find device "nvmf_init_br" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:51.880 Cannot find device "nvmf_init_br2" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:51.880 Cannot find device "nvmf_tgt_br" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:51.880 Cannot find device "nvmf_tgt_br2" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:51.880 Cannot find device "nvmf_init_br" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:51.880 Cannot find device "nvmf_init_br2" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:51.880 Cannot find device "nvmf_tgt_br" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:51.880 Cannot find device "nvmf_tgt_br2" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:16:51.880 06:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:51.880 Cannot find device "nvmf_br" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:51.880 Cannot find device "nvmf_init_if" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:51.880 Cannot find device "nvmf_init_if2" 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:51.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:51.880 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:51.880 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:52.140 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:52.140 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.078 ms 00:16:52.140 00:16:52.140 --- 10.0.0.3 ping statistics --- 00:16:52.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.140 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:16:52.140 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:52.140 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:16:52.141 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.053 ms 00:16:52.141 00:16:52.141 --- 10.0.0.4 ping statistics --- 00:16:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.141 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:52.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:52.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:16:52.141 00:16:52.141 --- 10.0.0.1 ping statistics --- 00:16:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.141 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:52.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:16:52.141 00:16:52.141 --- 10.0.0.2 ping statistics --- 00:16:52.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.141 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@457 -- # return 0 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@505 -- # nvmfpid=86814 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@506 -- # waitforlisten 86814 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@831 -- # '[' -z 86814 ']' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.141 06:09:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:52.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:52.141 06:09:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.400 [2024-10-01 06:09:17.776779] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:16:52.400 [2024-10-01 06:09:17.776878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.400 [2024-10-01 06:09:17.918992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.400 [2024-10-01 06:09:17.953611] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.400 [2024-10-01 06:09:17.953923] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.400 [2024-10-01 06:09:17.954070] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:52.400 [2024-10-01 06:09:17.954179] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:52.400 [2024-10-01 06:09:17.954211] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
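For reference, the target logged above runs inside the test namespace and the harness simply polls its RPC socket before continuing; a simplified stand-in for the nvmfappstart/waitforlisten pair, with the binary and rpc.py paths assumed:

  # launch nvmf_tgt in the namespace with the same flags as in the trace
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the application answers on its default RPC socket
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done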
00:16:52.400 [2024-10-01 06:09:17.954682] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:16:52.400 [2024-10-01 06:09:17.954821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.400 [2024-10-01 06:09:17.954985] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.400 [2024-10-01 06:09:17.954989] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:52.400 [2024-10-01 06:09:17.985816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # return 0 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 Malloc0 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 Delay0 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 [2024-10-01 06:09:18.109933] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:52.661 06:09:18 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:52.661 [2024-10-01 06:09:18.146116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.661 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:16:52.920 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:16:52.920 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:16:52.920 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:16:52.920 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:16:52.920 06:09:18 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=86865 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:16:54.825 06:09:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:16:54.825 [global] 00:16:54.825 thread=1 00:16:54.825 invalidate=1 00:16:54.825 rw=write 00:16:54.825 time_based=1 00:16:54.825 runtime=60 00:16:54.825 ioengine=libaio 00:16:54.825 direct=1 00:16:54.825 bs=4096 00:16:54.825 iodepth=1 00:16:54.825 norandommap=0 00:16:54.825 numjobs=1 00:16:54.825 00:16:54.825 verify_dump=1 00:16:54.825 verify_backlog=512 00:16:54.825 verify_state_save=0 00:16:54.825 do_verify=1 00:16:54.825 verify=crc32c-intel 00:16:54.825 [job0] 00:16:54.825 filename=/dev/nvme0n1 00:16:54.825 Could not set queue depth (nvme0n1) 00:16:55.085 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:55.085 fio-3.35 00:16:55.085 Starting 1 thread 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.372 true 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.372 true 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.372 true 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:16:58.372 true 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.372 06:09:23 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:00.906 true 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:00.906 true 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:00.906 true 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:00.906 true 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:17:00.906 06:09:26 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 86865 00:17:57.135 00:17:57.135 job0: (groupid=0, jobs=1): err= 0: pid=86886: Tue Oct 1 06:10:20 2024 00:17:57.135 read: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec) 00:17:57.135 slat (usec): min=10, max=13734, avg=14.70, stdev=70.58 00:17:57.135 clat (usec): min=150, max=40741k, avg=1008.44, stdev=181879.74 00:17:57.135 lat (usec): min=162, max=40741k, avg=1023.14, stdev=181879.74 00:17:57.135 clat percentiles (usec): 00:17:57.135 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 178], 00:17:57.135 | 30.00th=[ 182], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 198], 00:17:57.135 | 70.00th=[ 204], 80.00th=[ 215], 90.00th=[ 227], 95.00th=[ 241], 00:17:57.135 | 99.00th=[ 273], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 515], 00:17:57.135 | 99.99th=[ 1139] 00:17:57.135 write: IOPS=836, BW=3345KiB/s (3425kB/s)(196MiB/60000msec); 0 zone resets 00:17:57.135 slat (usec): min=13, max=595, avg=19.67, stdev= 6.85 00:17:57.135 clat (usec): min=15, max=584, avg=149.83, stdev=22.42 00:17:57.135 lat (usec): min=129, max=762, avg=169.50, stdev=24.53 00:17:57.135 clat percentiles (usec): 00:17:57.135 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 126], 20.00th=[ 131], 00:17:57.135 | 30.00th=[ 137], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:17:57.135 | 70.00th=[ 159], 80.00th=[ 165], 90.00th=[ 178], 95.00th=[ 190], 00:17:57.135 | 99.00th=[ 217], 
99.50th=[ 231], 99.90th=[ 269], 99.95th=[ 306], 00:17:57.135 | 99.99th=[ 529] 00:17:57.135 bw ( KiB/s): min= 4096, max=12288, per=100.00%, avg=10347.79, stdev=1697.48, samples=38 00:17:57.135 iops : min= 1024, max= 3072, avg=2586.95, stdev=424.37, samples=38 00:17:57.135 lat (usec) : 20=0.01%, 250=98.41%, 500=1.55%, 750=0.02%, 1000=0.01% 00:17:57.135 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:17:57.135 cpu : usr=0.61%, sys=2.23%, ctx=100369, majf=0, minf=5 00:17:57.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:57.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:57.135 issued rwts: total=50175,50176,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:57.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:57.135 00:17:57.135 Run status group 0 (all jobs): 00:17:57.135 READ: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:17:57.135 WRITE: bw=3345KiB/s (3425kB/s), 3345KiB/s-3345KiB/s (3425kB/s-3425kB/s), io=196MiB (206MB), run=60000-60000msec 00:17:57.135 00:17:57.135 Disk stats (read/write): 00:17:57.135 nvme0n1: ios=50059/50176, merge=0/0, ticks=10140/7943, in_queue=18083, util=99.75% 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:57.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:17:57.135 nvmf hotplug test: fio successful as expected 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 
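The block above is the heart of the initiator-timeout test; condensed from the trace, the sequence it drives (via rpc_cmd, which forwards to scripts/rpc.py) looks roughly like this. The NQN, serial, address and latency values are the ones logged above; everything else about the environment is assumed:

    # target: malloc-backed delay bdev (30 us latencies), exported over NVMe/TCP on 10.0.0.3:4420
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    # host: connect (hostnqn/hostid flags from the trace omitted here), then run the 60 s fio write job
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
    # while fio runs, push the delay bdev latencies up to ~31 s to provoke initiator timeouts ...
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 31000000
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 310000000
    # ... then drop them back to 30 us so the job can finish and fio exits cleanly, as it did above
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 avg_write 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_read 30
    scripts/rpc.py bdev_delay_update_latency Delay0 p99_write 30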
00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.135 rmmod nvme_tcp 00:17:57.135 rmmod nvme_fabrics 00:17:57.135 rmmod nvme_keyring 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@513 -- # '[' -n 86814 ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@514 -- # killprocess 86814 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@950 -- # '[' -z 86814 ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # kill -0 86814 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # uname 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86814 00:17:57.135 killing process with pid 86814 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86814' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@969 -- # kill 86814 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@974 -- # wait 86814 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-save 00:17:57.135 06:10:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.135 06:10:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.135 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:17:57.136 00:17:57.136 real 1m4.098s 00:17:57.136 user 3m50.294s 00:17:57.136 sys 0m22.117s 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.136 ************************************ 00:17:57.136 END TEST nvmf_initiator_timeout 00:17:57.136 ************************************ 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@68 -- # trap - SIGINT 
SIGTERM EXIT 00:17:57.136 00:17:57.136 real 6m45.128s 00:17:57.136 user 16m50.737s 00:17:57.136 sys 1m53.544s 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:57.136 ************************************ 00:17:57.136 END TEST nvmf_target_extra 00:17:57.136 ************************************ 00:17:57.136 06:10:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 06:10:21 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:57.136 06:10:21 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.136 06:10:21 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.136 06:10:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:57.136 ************************************ 00:17:57.136 START TEST nvmf_host 00:17:57.136 ************************************ 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:17:57.136 * Looking for test storage... 00:17:57.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:57.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.136 --rc genhtml_branch_coverage=1 00:17:57.136 --rc genhtml_function_coverage=1 00:17:57.136 --rc genhtml_legend=1 00:17:57.136 --rc geninfo_all_blocks=1 00:17:57.136 --rc geninfo_unexecuted_blocks=1 00:17:57.136 00:17:57.136 ' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:57.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.136 --rc genhtml_branch_coverage=1 00:17:57.136 --rc genhtml_function_coverage=1 00:17:57.136 --rc genhtml_legend=1 00:17:57.136 --rc geninfo_all_blocks=1 00:17:57.136 --rc geninfo_unexecuted_blocks=1 00:17:57.136 00:17:57.136 ' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:57.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.136 --rc genhtml_branch_coverage=1 00:17:57.136 --rc genhtml_function_coverage=1 00:17:57.136 --rc genhtml_legend=1 00:17:57.136 --rc geninfo_all_blocks=1 00:17:57.136 --rc geninfo_unexecuted_blocks=1 00:17:57.136 00:17:57.136 ' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:57.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.136 --rc genhtml_branch_coverage=1 00:17:57.136 --rc genhtml_function_coverage=1 00:17:57.136 --rc genhtml_legend=1 00:17:57.136 --rc geninfo_all_blocks=1 00:17:57.136 --rc geninfo_unexecuted_blocks=1 00:17:57.136 00:17:57.136 ' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.136 06:10:21 
nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:17:57.136 06:10:21 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:57.137 ************************************ 00:17:57.137 START TEST nvmf_identify 00:17:57.137 ************************************ 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:17:57.137 * Looking for test storage... 
00:17:57.137 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lcov --version 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:57.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.137 --rc genhtml_branch_coverage=1 00:17:57.137 --rc genhtml_function_coverage=1 00:17:57.137 --rc genhtml_legend=1 00:17:57.137 --rc geninfo_all_blocks=1 00:17:57.137 --rc geninfo_unexecuted_blocks=1 00:17:57.137 00:17:57.137 ' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:57.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.137 --rc genhtml_branch_coverage=1 00:17:57.137 --rc genhtml_function_coverage=1 00:17:57.137 --rc genhtml_legend=1 00:17:57.137 --rc geninfo_all_blocks=1 00:17:57.137 --rc geninfo_unexecuted_blocks=1 00:17:57.137 00:17:57.137 ' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:57.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.137 --rc genhtml_branch_coverage=1 00:17:57.137 --rc genhtml_function_coverage=1 00:17:57.137 --rc genhtml_legend=1 00:17:57.137 --rc geninfo_all_blocks=1 00:17:57.137 --rc geninfo_unexecuted_blocks=1 00:17:57.137 00:17:57.137 ' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:57.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:57.137 --rc genhtml_branch_coverage=1 00:17:57.137 --rc genhtml_function_coverage=1 00:17:57.137 --rc genhtml_legend=1 00:17:57.137 --rc geninfo_all_blocks=1 00:17:57.137 --rc geninfo_unexecuted_blocks=1 00:17:57.137 00:17:57.137 ' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:57.137 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.138 
06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:57.138 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:57.138 06:10:21 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:57.138 Cannot find device "nvmf_init_br" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:57.138 Cannot find device "nvmf_init_br2" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:57.138 Cannot find device "nvmf_tgt_br" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:17:57.138 Cannot find device "nvmf_tgt_br2" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:57.138 Cannot find device "nvmf_init_br" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:57.138 Cannot find device "nvmf_init_br2" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:57.138 Cannot find device "nvmf_tgt_br" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:57.138 Cannot find device "nvmf_tgt_br2" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:57.138 Cannot find device "nvmf_br" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:57.138 Cannot find device "nvmf_init_if" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:57.138 Cannot find device "nvmf_init_if2" 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.138 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.138 06:10:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.138 
06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:57.138 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:57.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:17:57.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:17:57.139 00:17:57.139 --- 10.0.0.3 ping statistics --- 00:17:57.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.139 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:57.139 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:17:57.139 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:17:57.139 00:17:57.139 --- 10.0.0.4 ping statistics --- 00:17:57.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.139 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:17:57.139 00:17:57.139 --- 10.0.0.1 ping statistics --- 00:17:57.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.139 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:57.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:17:57.139 00:17:57.139 --- 10.0.0.2 ping statistics --- 00:17:57.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.139 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@457 -- # return 0 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
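The "Cannot find device" messages and ping checks above come from nvmf_veth_init first tearing down any stale interfaces and then rebuilding the test topology; condensed from the trace, the topology for one initiator/target interface pair amounts to roughly the following (the second pair and the 10.0.0.2/10.0.0.4 addresses are omitted for brevity):

    # target-side interface lives in a private network namespace; the initiator side stays in the root namespace
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # bridge the two veth peers together and open up NVMe/TCP port 4420
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.3    # sanity check: root namespace can reach the target address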
00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=87823 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 87823 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 87823 ']' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 [2024-10-01 06:10:22.238712] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:17:57.139 [2024-10-01 06:10:22.239095] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:57.139 [2024-10-01 06:10:22.379843] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:57.139 [2024-10-01 06:10:22.421363] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:57.139 [2024-10-01 06:10:22.421424] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:57.139 [2024-10-01 06:10:22.421448] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:57.139 [2024-10-01 06:10:22.421458] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:57.139 [2024-10-01 06:10:22.421467] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
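The target itself is then started inside the namespace with shared-memory id 0, the full tracepoint mask (-e 0xFFFF) and a four-core mask (-m 0xF), and the harness blocks until the RPC socket appears. A minimal stand-in for that sequence, assuming the waitforlisten helper simply polls for /var/tmp/spdk.sock (the real trap additionally runs process_shm and nvmftestfini), looks like this:

# Start nvmf_tgt in the target namespace; this is the process recorded as nvmfpid=87823 above
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Tear the target down on any exit, mirroring the trap installed by identify.sh
trap 'kill "$nvmfpid" 2>/dev/null; exit 1' SIGINT SIGTERM EXIT

# Wait for the app to create its UNIX-domain RPC socket before issuing any rpc_cmd
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done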
00:17:57.139 [2024-10-01 06:10:22.422256] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:57.139 [2024-10-01 06:10:22.422356] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.139 [2024-10-01 06:10:22.422496] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:57.139 [2024-10-01 06:10:22.422504] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:57.139 [2024-10-01 06:10:22.455488] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 [2024-10-01 06:10:22.520179] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 Malloc0 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.139 [2024-10-01 06:10:22.608388] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.139 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.140 [ 00:17:57.140 { 00:17:57.140 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:57.140 "subtype": "Discovery", 00:17:57.140 "listen_addresses": [ 00:17:57.140 { 00:17:57.140 "trtype": "TCP", 00:17:57.140 "adrfam": "IPv4", 00:17:57.140 "traddr": "10.0.0.3", 00:17:57.140 "trsvcid": "4420" 00:17:57.140 } 00:17:57.140 ], 00:17:57.140 "allow_any_host": true, 00:17:57.140 "hosts": [] 00:17:57.140 }, 00:17:57.140 { 00:17:57.140 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.140 "subtype": "NVMe", 00:17:57.140 "listen_addresses": [ 00:17:57.140 { 00:17:57.140 "trtype": "TCP", 00:17:57.140 "adrfam": "IPv4", 00:17:57.140 "traddr": "10.0.0.3", 00:17:57.140 "trsvcid": "4420" 00:17:57.140 } 00:17:57.140 ], 00:17:57.140 "allow_any_host": true, 00:17:57.140 "hosts": [], 00:17:57.140 "serial_number": "SPDK00000000000001", 00:17:57.140 "model_number": "SPDK bdev Controller", 00:17:57.140 "max_namespaces": 32, 00:17:57.140 "min_cntlid": 1, 00:17:57.140 "max_cntlid": 65519, 00:17:57.140 "namespaces": [ 00:17:57.140 { 00:17:57.140 "nsid": 1, 00:17:57.140 "bdev_name": "Malloc0", 00:17:57.140 "name": "Malloc0", 00:17:57.140 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:17:57.140 "eui64": "ABCDEF0123456789", 00:17:57.140 "uuid": "49079c09-2a13-4da1-83bf-ce60ebb856d8" 00:17:57.140 } 00:17:57.140 ] 00:17:57.140 } 00:17:57.140 ] 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.140 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:17:57.140 [2024-10-01 06:10:22.661783] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
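With the listener up, the rpc_cmd calls traced above amount to the following configuration sequence; rpc_cmd is assumed here to dispatch to scripts/rpc.py against the default /var/tmp/spdk.sock socket, and the flag values are copied verbatim from the trace. The last command is the identify run itself; its -L all flag enables every log flag, which is what produces the nvme_tcp.c/nvme_ctrlr.c *DEBUG* lines that follow.

cd /home/vagrant/spdk_repo/spdk

# TCP transport (-o and -u 8192 as in the trace), a 64 MiB malloc bdev with 512-byte blocks,
# and a subsystem with one namespace plus data and discovery listeners on 10.0.0.3:4420
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_get_subsystems

# Query the discovery controller over NVMe/TCP with full debug logging
build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all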
00:17:57.140 [2024-10-01 06:10:22.662017] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87845 ] 00:17:57.405 [2024-10-01 06:10:22.799042] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:17:57.405 [2024-10-01 06:10:22.799117] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:57.405 [2024-10-01 06:10:22.799123] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:57.405 [2024-10-01 06:10:22.799135] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:57.405 [2024-10-01 06:10:22.799144] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:57.405 [2024-10-01 06:10:22.799442] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:17:57.405 [2024-10-01 06:10:22.799510] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xe21ac0 0 00:17:57.405 [2024-10-01 06:10:22.812962] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:57.405 [2024-10-01 06:10:22.812986] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:57.405 [2024-10-01 06:10:22.813008] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:57.405 [2024-10-01 06:10:22.813012] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:57.405 [2024-10-01 06:10:22.813043] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.405 [2024-10-01 06:10:22.813051] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.405 [2024-10-01 06:10:22.813055] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.405 [2024-10-01 06:10:22.813069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:57.405 [2024-10-01 06:10:22.813100] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.405 [2024-10-01 06:10:22.820942] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.405 [2024-10-01 06:10:22.820963] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.405 [2024-10-01 06:10:22.820985] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.405 [2024-10-01 06:10:22.820990] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.405 [2024-10-01 06:10:22.821001] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:57.405 [2024-10-01 06:10:22.821009] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:17:57.405 [2024-10-01 06:10:22.821021] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:17:57.405 [2024-10-01 06:10:22.821037] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.405 [2024-10-01 06:10:22.821042] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.405 
[2024-10-01 06:10:22.821046] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.405 [2024-10-01 06:10:22.821055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.405 [2024-10-01 06:10:22.821082] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.405 [2024-10-01 06:10:22.821132] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.405 [2024-10-01 06:10:22.821139] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.405 [2024-10-01 06:10:22.821143] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.405 [2024-10-01 06:10:22.821147] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.405 [2024-10-01 06:10:22.821153] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:17:57.405 [2024-10-01 06:10:22.821160] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:17:57.406 [2024-10-01 06:10:22.821168] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821172] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821176] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821219] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.821276] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.821283] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.821287] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821291] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.821297] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:17:57.406 [2024-10-01 06:10:22.821306] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821314] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821318] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821322] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821349] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.821392] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.821399] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.821403] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821407] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.821413] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821423] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821428] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821432] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821459] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.821505] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.821511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.821515] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821519] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.821525] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:17:57.406 [2024-10-01 06:10:22.821530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821538] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821644] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:17:57.406 [2024-10-01 06:10:22.821649] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821659] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821664] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821668] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821694] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.821752] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.821759] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.821763] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821767] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.821773] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.406 [2024-10-01 06:10:22.821783] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821788] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821792] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821817] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.821863] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.821870] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.821873] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821878] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.821883] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.406 [2024-10-01 06:10:22.821888] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:17:57.406 [2024-10-01 06:10:22.821896] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:17:57.406 [2024-10-01 06:10:22.821927] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.406 [2024-10-01 06:10:22.821938] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.821943] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.821951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.406 [2024-10-01 06:10:22.821985] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.822075] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.406 [2024-10-01 06:10:22.822083] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.406 [2024-10-01 06:10:22.822087] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822091] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe21ac0): datao=0, datal=4096, cccid=0 00:17:57.406 [2024-10-01 06:10:22.822097] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5a7c0) on tqpair(0xe21ac0): expected_datao=0, payload_size=4096 00:17:57.406 [2024-10-01 06:10:22.822102] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822111] 
nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822115] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822125] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.822131] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.822135] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822139] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.822148] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:17:57.406 [2024-10-01 06:10:22.822154] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:17:57.406 [2024-10-01 06:10:22.822158] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:17:57.406 [2024-10-01 06:10:22.822164] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:17:57.406 [2024-10-01 06:10:22.822169] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:17:57.406 [2024-10-01 06:10:22.822174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:17:57.406 [2024-10-01 06:10:22.822183] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.406 [2024-10-01 06:10:22.822196] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822201] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822205] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.822214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.406 [2024-10-01 06:10:22.822235] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.406 [2024-10-01 06:10:22.822289] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.406 [2024-10-01 06:10:22.822296] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.406 [2024-10-01 06:10:22.822300] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822304] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.406 [2024-10-01 06:10:22.822313] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822317] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822321] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.822328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.406 [2024-10-01 06:10:22.822335] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:17:57.406 [2024-10-01 06:10:22.822340] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822358] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xe21ac0) 00:17:57.406 [2024-10-01 06:10:22.822365] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.406 [2024-10-01 06:10:22.822371] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.406 [2024-10-01 06:10:22.822375] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822379] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.407 [2024-10-01 06:10:22.822391] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822395] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822399] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.407 [2024-10-01 06:10:22.822410] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.407 [2024-10-01 06:10:22.822423] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.407 [2024-10-01 06:10:22.822431] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822435] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822442] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.407 [2024-10-01 06:10:22.822463] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a7c0, cid 0, qid 0 00:17:57.407 [2024-10-01 06:10:22.822470] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5a940, cid 1, qid 0 00:17:57.407 [2024-10-01 06:10:22.822475] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5aac0, cid 2, qid 0 00:17:57.407 [2024-10-01 06:10:22.822480] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.407 [2024-10-01 06:10:22.822485] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5adc0, cid 4, qid 0 00:17:57.407 [2024-10-01 06:10:22.822567] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.407 [2024-10-01 06:10:22.822576] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.407 [2024-10-01 06:10:22.822580] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822584] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5adc0) on tqpair=0xe21ac0 00:17:57.407 [2024-10-01 06:10:22.822589] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:17:57.407 [2024-10-01 06:10:22.822595] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:17:57.407 [2024-10-01 06:10:22.822607] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822612] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.407 [2024-10-01 06:10:22.822638] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5adc0, cid 4, qid 0 00:17:57.407 [2024-10-01 06:10:22.822696] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.407 [2024-10-01 06:10:22.822703] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.407 [2024-10-01 06:10:22.822707] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822711] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe21ac0): datao=0, datal=4096, cccid=4 00:17:57.407 [2024-10-01 06:10:22.822716] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5adc0) on tqpair(0xe21ac0): expected_datao=0, payload_size=4096 00:17:57.407 [2024-10-01 06:10:22.822721] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822728] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822732] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822741] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.407 [2024-10-01 06:10:22.822747] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.407 [2024-10-01 06:10:22.822750] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822755] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5adc0) on tqpair=0xe21ac0 00:17:57.407 [2024-10-01 06:10:22.822767] nvme_ctrlr.c:4189:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:17:57.407 [2024-10-01 06:10:22.822795] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822801] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.407 [2024-10-01 06:10:22.822816] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.822824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.822831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.407 [2024-10-01 06:10:22.822871] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5adc0, cid 4, qid 0 00:17:57.407 [2024-10-01 06:10:22.822880] 
nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5af40, cid 5, qid 0 00:17:57.407 [2024-10-01 06:10:22.823002] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.407 [2024-10-01 06:10:22.823012] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.407 [2024-10-01 06:10:22.823016] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823020] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe21ac0): datao=0, datal=1024, cccid=4 00:17:57.407 [2024-10-01 06:10:22.823026] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5adc0) on tqpair(0xe21ac0): expected_datao=0, payload_size=1024 00:17:57.407 [2024-10-01 06:10:22.823031] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823038] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823042] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823049] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.407 [2024-10-01 06:10:22.823055] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.407 [2024-10-01 06:10:22.823059] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823063] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5af40) on tqpair=0xe21ac0 00:17:57.407 [2024-10-01 06:10:22.823083] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.407 [2024-10-01 06:10:22.823091] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.407 [2024-10-01 06:10:22.823095] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5adc0) on tqpair=0xe21ac0 00:17:57.407 [2024-10-01 06:10:22.823112] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823117] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.823125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.407 [2024-10-01 06:10:22.823151] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5adc0, cid 4, qid 0 00:17:57.407 [2024-10-01 06:10:22.823219] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.407 [2024-10-01 06:10:22.823227] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.407 [2024-10-01 06:10:22.823231] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823235] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe21ac0): datao=0, datal=3072, cccid=4 00:17:57.407 [2024-10-01 06:10:22.823240] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5adc0) on tqpair(0xe21ac0): expected_datao=0, payload_size=3072 00:17:57.407 [2024-10-01 06:10:22.823245] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823267] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823272] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 
06:10:22.823280] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.407 [2024-10-01 06:10:22.823287] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.407 [2024-10-01 06:10:22.823290] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823295] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5adc0) on tqpair=0xe21ac0 00:17:57.407 [2024-10-01 06:10:22.823304] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823309] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xe21ac0) 00:17:57.407 [2024-10-01 06:10:22.823332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.407 [2024-10-01 06:10:22.823355] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5adc0, cid 4, qid 0 00:17:57.407 [2024-10-01 06:10:22.823417] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.407 [2024-10-01 06:10:22.823424] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.407 [2024-10-01 06:10:22.823428] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823432] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xe21ac0): datao=0, datal=8, cccid=4 00:17:57.407 [2024-10-01 06:10:22.823436] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xe5adc0) on tqpair(0xe21ac0): expected_datao=0, payload_size=8 00:17:57.407 [2024-10-01 06:10:22.823441] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823448] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.407 [2024-10-01 06:10:22.823452] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.407 ===================================================== 00:17:57.407 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:17:57.407 ===================================================== 00:17:57.407 Controller Capabilities/Features 00:17:57.407 ================================ 00:17:57.407 Vendor ID: 0000 00:17:57.407 Subsystem Vendor ID: 0000 00:17:57.407 Serial Number: .................... 00:17:57.407 Model Number: ........................................ 
00:17:57.407 Firmware Version: 25.01 00:17:57.407 Recommended Arb Burst: 0 00:17:57.407 IEEE OUI Identifier: 00 00 00 00:17:57.407 Multi-path I/O 00:17:57.407 May have multiple subsystem ports: No 00:17:57.407 May have multiple controllers: No 00:17:57.407 Associated with SR-IOV VF: No 00:17:57.407 Max Data Transfer Size: 131072 00:17:57.407 Max Number of Namespaces: 0 00:17:57.407 Max Number of I/O Queues: 1024 00:17:57.408 NVMe Specification Version (VS): 1.3 00:17:57.408 NVMe Specification Version (Identify): 1.3 00:17:57.408 Maximum Queue Entries: 128 00:17:57.408 Contiguous Queues Required: Yes 00:17:57.408 Arbitration Mechanisms Supported 00:17:57.408 Weighted Round Robin: Not Supported 00:17:57.408 Vendor Specific: Not Supported 00:17:57.408 Reset Timeout: 15000 ms 00:17:57.408 Doorbell Stride: 4 bytes 00:17:57.408 NVM Subsystem Reset: Not Supported 00:17:57.408 Command Sets Supported 00:17:57.408 NVM Command Set: Supported 00:17:57.408 Boot Partition: Not Supported 00:17:57.408 Memory Page Size Minimum: 4096 bytes 00:17:57.408 Memory Page Size Maximum: 4096 bytes 00:17:57.408 Persistent Memory Region: Not Supported 00:17:57.408 Optional Asynchronous Events Supported 00:17:57.408 Namespace Attribute Notices: Not Supported 00:17:57.408 Firmware Activation Notices: Not Supported 00:17:57.408 ANA Change Notices: Not Supported 00:17:57.408 PLE Aggregate Log Change Notices: Not Supported 00:17:57.408 LBA Status Info Alert Notices: Not Supported 00:17:57.408 EGE Aggregate Log Change Notices: Not Supported 00:17:57.408 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.408 Zone Descriptor Change Notices: Not Supported 00:17:57.408 Discovery Log Change Notices: Supported 00:17:57.408 Controller Attributes 00:17:57.408 128-bit Host Identifier: Not Supported 00:17:57.408 Non-Operational Permissive Mode: Not Supported 00:17:57.408 NVM Sets: Not Supported 00:17:57.408 Read Recovery Levels: Not Supported 00:17:57.408 Endurance Groups: Not Supported 00:17:57.408 Predictable Latency Mode: Not Supported 00:17:57.408 Traffic Based Keep ALive: Not Supported 00:17:57.408 Namespace Granularity: Not Supported 00:17:57.408 SQ Associations: Not Supported 00:17:57.408 UUID List: Not Supported 00:17:57.408 Multi-Domain Subsystem: Not Supported 00:17:57.408 Fixed Capacity Management: Not Supported 00:17:57.408 Variable Capacity Management: Not Supported 00:17:57.408 Delete Endurance Group: Not Supported 00:17:57.408 Delete NVM Set: Not Supported 00:17:57.408 Extended LBA Formats Supported: Not Supported 00:17:57.408 Flexible Data Placement Supported: Not Supported 00:17:57.408 00:17:57.408 Controller Memory Buffer Support 00:17:57.408 ================================ 00:17:57.408 Supported: No 00:17:57.408 00:17:57.408 Persistent Memory Region Support 00:17:57.408 ================================ 00:17:57.408 Supported: No 00:17:57.408 00:17:57.408 Admin Command Set Attributes 00:17:57.408 ============================ 00:17:57.408 Security Send/Receive: Not Supported 00:17:57.408 Format NVM: Not Supported 00:17:57.408 Firmware Activate/Download: Not Supported 00:17:57.408 Namespace Management: Not Supported 00:17:57.408 Device Self-Test: Not Supported 00:17:57.408 Directives: Not Supported 00:17:57.408 NVMe-MI: Not Supported 00:17:57.408 Virtualization Management: Not Supported 00:17:57.408 Doorbell Buffer Config: Not Supported 00:17:57.408 Get LBA Status Capability: Not Supported 00:17:57.408 Command & Feature Lockdown Capability: Not Supported 00:17:57.408 Abort Command Limit: 1 00:17:57.408 Async 
Event Request Limit: 4 00:17:57.408 Number of Firmware Slots: N/A 00:17:57.408 Firmware Slot 1 Read-Only: N/A 00:17:57.408 [2024-10-01 06:10:22.823467] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.408 [2024-10-01 06:10:22.823474] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.408 [2024-10-01 06:10:22.823478] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.408 [2024-10-01 06:10:22.823482] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5adc0) on tqpair=0xe21ac0 00:17:57.408 Firmware Activation Without Reset: N/A 00:17:57.408 Multiple Update Detection Support: N/A 00:17:57.408 Firmware Update Granularity: No Information Provided 00:17:57.408 Per-Namespace SMART Log: No 00:17:57.408 Asymmetric Namespace Access Log Page: Not Supported 00:17:57.408 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:17:57.408 Command Effects Log Page: Not Supported 00:17:57.408 Get Log Page Extended Data: Supported 00:17:57.408 Telemetry Log Pages: Not Supported 00:17:57.408 Persistent Event Log Pages: Not Supported 00:17:57.408 Supported Log Pages Log Page: May Support 00:17:57.408 Commands Supported & Effects Log Page: Not Supported 00:17:57.408 Feature Identifiers & Effects Log Page:May Support 00:17:57.408 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.408 Data Area 4 for Telemetry Log: Not Supported 00:17:57.408 Error Log Page Entries Supported: 128 00:17:57.408 Keep Alive: Not Supported 00:17:57.408 00:17:57.408 NVM Command Set Attributes 00:17:57.408 ========================== 00:17:57.408 Submission Queue Entry Size 00:17:57.408 Max: 1 00:17:57.408 Min: 1 00:17:57.408 Completion Queue Entry Size 00:17:57.408 Max: 1 00:17:57.408 Min: 1 00:17:57.408 Number of Namespaces: 0 00:17:57.408 Compare Command: Not Supported 00:17:57.408 Write Uncorrectable Command: Not Supported 00:17:57.408 Dataset Management Command: Not Supported 00:17:57.408 Write Zeroes Command: Not Supported 00:17:57.408 Set Features Save Field: Not Supported 00:17:57.408 Reservations: Not Supported 00:17:57.408 Timestamp: Not Supported 00:17:57.408 Copy: Not Supported 00:17:57.408 Volatile Write Cache: Not Present 00:17:57.408 Atomic Write Unit (Normal): 1 00:17:57.408 Atomic Write Unit (PFail): 1 00:17:57.408 Atomic Compare & Write Unit: 1 00:17:57.408 Fused Compare & Write: Supported 00:17:57.408 Scatter-Gather List 00:17:57.408 SGL Command Set: Supported 00:17:57.408 SGL Keyed: Supported 00:17:57.408 SGL Bit Bucket Descriptor: Not Supported 00:17:57.408 SGL Metadata Pointer: Not Supported 00:17:57.408 Oversized SGL: Not Supported 00:17:57.408 SGL Metadata Address: Not Supported 00:17:57.408 SGL Offset: Supported 00:17:57.408 Transport SGL Data Block: Not Supported 00:17:57.408 Replay Protected Memory Block: Not Supported 00:17:57.408 00:17:57.408 Firmware Slot Information 00:17:57.408 ========================= 00:17:57.408 Active slot: 0 00:17:57.408 00:17:57.408 00:17:57.408 Error Log 00:17:57.408 ========= 00:17:57.408 00:17:57.408 Active Namespaces 00:17:57.408 ================= 00:17:57.408 Discovery Log Page 00:17:57.408 ================== 00:17:57.408 Generation Counter: 2 00:17:57.408 Number of Records: 2 00:17:57.408 Record Format: 0 00:17:57.408 00:17:57.408 Discovery Log Entry 0 00:17:57.408 ---------------------- 00:17:57.408 Transport Type: 3 (TCP) 00:17:57.408 Address Family: 1 (IPv4) 00:17:57.408 Subsystem Type: 3 (Current Discovery Subsystem) 00:17:57.408 Entry Flags: 00:17:57.408 Duplicate Returned 
Information: 1 00:17:57.408 Explicit Persistent Connection Support for Discovery: 1 00:17:57.408 Transport Requirements: 00:17:57.408 Secure Channel: Not Required 00:17:57.408 Port ID: 0 (0x0000) 00:17:57.408 Controller ID: 65535 (0xffff) 00:17:57.408 Admin Max SQ Size: 128 00:17:57.408 Transport Service Identifier: 4420 00:17:57.408 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:17:57.408 Transport Address: 10.0.0.3 00:17:57.408 Discovery Log Entry 1 00:17:57.408 ---------------------- 00:17:57.408 Transport Type: 3 (TCP) 00:17:57.408 Address Family: 1 (IPv4) 00:17:57.408 Subsystem Type: 2 (NVM Subsystem) 00:17:57.408 Entry Flags: 00:17:57.408 Duplicate Returned Information: 0 00:17:57.408 Explicit Persistent Connection Support for Discovery: 0 00:17:57.408 Transport Requirements: 00:17:57.408 Secure Channel: Not Required 00:17:57.408 Port ID: 0 (0x0000) 00:17:57.408 Controller ID: 65535 (0xffff) 00:17:57.408 Admin Max SQ Size: 128 00:17:57.408 Transport Service Identifier: 4420 00:17:57.408 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:17:57.408 Transport Address: 10.0.0.3 [2024-10-01 06:10:22.823573] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:17:57.408 [2024-10-01 06:10:22.823586] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a7c0) on tqpair=0xe21ac0 00:17:57.408 [2024-10-01 06:10:22.823593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.408 [2024-10-01 06:10:22.823599] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5a940) on tqpair=0xe21ac0 00:17:57.408 [2024-10-01 06:10:22.823603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.408 [2024-10-01 06:10:22.823609] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5aac0) on tqpair=0xe21ac0 00:17:57.408 [2024-10-01 06:10:22.823613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.408 [2024-10-01 06:10:22.823619] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.408 [2024-10-01 06:10:22.823624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.408 [2024-10-01 06:10:22.823633] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.408 [2024-10-01 06:10:22.823638] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823642] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.823650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.823672] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.823718] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.823725] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.823739] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823759] 
nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.823768] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823773] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823777] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.823785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.823811] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.823871] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.823878] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.823882] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823886] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.823892] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:17:57.409 [2024-10-01 06:10:22.823897] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:17:57.409 [2024-10-01 06:10:22.823908] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823927] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.823933] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.823941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.823962] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824015] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824022] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824026] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824031] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824042] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824047] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824051] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824078] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824121] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824128] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824132] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824136] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824147] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824152] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824156] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824182] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824233] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824242] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824246] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824250] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824276] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824281] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824285] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824311] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824355] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824362] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824365] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824370] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824380] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824385] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824389] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824397] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824414] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824461] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824468] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824472] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824476] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 
[2024-10-01 06:10:22.824486] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824491] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824495] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824502] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824520] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824568] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824575] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824578] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824583] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824593] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824598] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824602] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824627] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824670] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824677] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824681] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824685] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824696] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824701] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824704] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824729] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824775] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824782] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824786] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824790] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824801] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824806] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 
06:10:22.824810] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.824817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.824834] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.824875] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.824882] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.824885] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824890] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.409 [2024-10-01 06:10:22.824900] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824905] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.824924] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xe21ac0) 00:17:57.409 [2024-10-01 06:10:22.828966] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.409 [2024-10-01 06:10:22.828996] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xe5ac40, cid 3, qid 0 00:17:57.409 [2024-10-01 06:10:22.829042] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.409 [2024-10-01 06:10:22.829050] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.409 [2024-10-01 06:10:22.829054] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.409 [2024-10-01 06:10:22.829058] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xe5ac40) on tqpair=0xe21ac0 00:17:57.410 [2024-10-01 06:10:22.829067] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:17:57.410 00:17:57.410 06:10:22 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:17:57.410 [2024-10-01 06:10:22.866594] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:17:57.410 [2024-10-01 06:10:22.866647] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87847 ] 00:17:57.410 [2024-10-01 06:10:23.002957] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:17:57.410 [2024-10-01 06:10:23.007045] nvme_tcp.c:2349:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:17:57.410 [2024-10-01 06:10:23.007055] nvme_tcp.c:2353:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:17:57.410 [2024-10-01 06:10:23.007067] nvme_tcp.c:2374:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:17:57.410 [2024-10-01 06:10:23.007075] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:17:57.410 [2024-10-01 06:10:23.007424] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:17:57.410 [2024-10-01 06:10:23.007485] nvme_tcp.c:1566:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xfabac0 0 00:17:57.675 [2024-10-01 06:10:23.021980] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:17:57.675 [2024-10-01 06:10:23.022006] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:17:57.675 [2024-10-01 06:10:23.022012] nvme_tcp.c:1612:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:17:57.675 [2024-10-01 06:10:23.022016] nvme_tcp.c:1613:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:17:57.675 [2024-10-01 06:10:23.022045] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.022052] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.022056] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.022069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:17:57.675 [2024-10-01 06:10:23.022099] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.029067] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.029123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.029129] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029135] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.675 [2024-10-01 06:10:23.029148] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:17:57.675 [2024-10-01 06:10:23.029157] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:17:57.675 [2024-10-01 06:10:23.029164] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:17:57.675 [2024-10-01 06:10:23.029179] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029185] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029189] nvme_tcp.c: 
986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.029199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.675 [2024-10-01 06:10:23.029228] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.029313] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.029322] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.029326] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029330] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.675 [2024-10-01 06:10:23.029336] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:17:57.675 [2024-10-01 06:10:23.029345] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:17:57.675 [2024-10-01 06:10:23.029353] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029358] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029362] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.029370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.675 [2024-10-01 06:10:23.029390] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.029463] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.029471] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.029475] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029479] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.675 [2024-10-01 06:10:23.029486] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:17:57.675 [2024-10-01 06:10:23.029495] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:17:57.675 [2024-10-01 06:10:23.029518] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029523] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029527] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.029534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.675 [2024-10-01 06:10:23.029553] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.029639] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.029646] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.029650] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029655] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.675 [2024-10-01 06:10:23.029660] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:17:57.675 [2024-10-01 06:10:23.029670] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029675] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029679] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.029686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.675 [2024-10-01 06:10:23.029703] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.029795] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.029802] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.029805] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029809] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.675 [2024-10-01 06:10:23.029814] nvme_ctrlr.c:3893:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:17:57.675 [2024-10-01 06:10:23.029819] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:17:57.675 [2024-10-01 06:10:23.029826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:17:57.675 [2024-10-01 06:10:23.029944] nvme_ctrlr.c:4091:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:17:57.675 [2024-10-01 06:10:23.029973] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:17:57.675 [2024-10-01 06:10:23.029984] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029989] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.029993] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.675 [2024-10-01 06:10:23.030001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.675 [2024-10-01 06:10:23.030021] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.675 [2024-10-01 06:10:23.030103] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.675 [2024-10-01 06:10:23.030111] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.675 [2024-10-01 06:10:23.030115] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.675 [2024-10-01 06:10:23.030119] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.030125] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:17:57.676 [2024-10-01 06:10:23.030135] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030140] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030144] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.676 [2024-10-01 06:10:23.030170] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.676 [2024-10-01 06:10:23.030242] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.676 [2024-10-01 06:10:23.030249] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.676 [2024-10-01 06:10:23.030253] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030258] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.030262] nvme_ctrlr.c:3928:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:17:57.676 [2024-10-01 06:10:23.030268] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030276] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:17:57.676 [2024-10-01 06:10:23.030291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030301] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030305] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030313] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.676 [2024-10-01 06:10:23.030333] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.676 [2024-10-01 06:10:23.030490] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.676 [2024-10-01 06:10:23.030509] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.676 [2024-10-01 06:10:23.030514] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030518] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=4096, cccid=0 00:17:57.676 [2024-10-01 06:10:23.030523] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe47c0) on tqpair(0xfabac0): expected_datao=0, payload_size=4096 00:17:57.676 [2024-10-01 06:10:23.030529] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030537] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030541] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 
06:10:23.030550] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.676 [2024-10-01 06:10:23.030556] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.676 [2024-10-01 06:10:23.030560] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030564] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.030572] nvme_ctrlr.c:2077:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:17:57.676 [2024-10-01 06:10:23.030577] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:17:57.676 [2024-10-01 06:10:23.030582] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:17:57.676 [2024-10-01 06:10:23.030587] nvme_ctrlr.c:2108:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:17:57.676 [2024-10-01 06:10:23.030591] nvme_ctrlr.c:2123:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:17:57.676 [2024-10-01 06:10:23.030597] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030605] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030618] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030623] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030626] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.676 [2024-10-01 06:10:23.030655] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.676 [2024-10-01 06:10:23.030733] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.676 [2024-10-01 06:10:23.030740] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.676 [2024-10-01 06:10:23.030744] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030748] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.030756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030760] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030764] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.676 [2024-10-01 06:10:23.030778] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030782] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030785] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xfabac0) 00:17:57.676 
[2024-10-01 06:10:23.030791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.676 [2024-10-01 06:10:23.030797] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030801] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030805] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.676 [2024-10-01 06:10:23.030817] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030821] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030824] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.676 [2024-10-01 06:10:23.030835] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030848] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.030856] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.030860] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.030867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.676 [2024-10-01 06:10:23.030888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe47c0, cid 0, qid 0 00:17:57.676 [2024-10-01 06:10:23.030896] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4940, cid 1, qid 0 00:17:57.676 [2024-10-01 06:10:23.030934] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4ac0, cid 2, qid 0 00:17:57.676 [2024-10-01 06:10:23.030940] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.676 [2024-10-01 06:10:23.030945] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.676 [2024-10-01 06:10:23.031112] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.676 [2024-10-01 06:10:23.031123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.676 [2024-10-01 06:10:23.031128] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.031139] nvme_ctrlr.c:3046:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:17:57.676 [2024-10-01 06:10:23.031145] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.031158] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.031166] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.031174] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031179] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031183] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.031191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:57.676 [2024-10-01 06:10:23.031213] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.676 [2024-10-01 06:10:23.031296] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.676 [2024-10-01 06:10:23.031319] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.676 [2024-10-01 06:10:23.031323] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031328] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.676 [2024-10-01 06:10:23.031426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.031438] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:17:57.676 [2024-10-01 06:10:23.031446] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031450] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.676 [2024-10-01 06:10:23.031458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.676 [2024-10-01 06:10:23.031478] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.676 [2024-10-01 06:10:23.031572] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.676 [2024-10-01 06:10:23.031579] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.676 [2024-10-01 06:10:23.031583] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.676 [2024-10-01 06:10:23.031587] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=4096, cccid=4 00:17:57.677 [2024-10-01 06:10:23.031592] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe4dc0) on tqpair(0xfabac0): expected_datao=0, payload_size=4096 00:17:57.677 [2024-10-01 06:10:23.031597] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031605] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031609] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031618] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.031631] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.031635] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031640] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.031656] nvme_ctrlr.c:4722:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:17:57.677 [2024-10-01 06:10:23.031667] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.031677] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.031685] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031690] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.031697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.031717] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.677 [2024-10-01 06:10:23.031859] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.677 [2024-10-01 06:10:23.031867] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.677 [2024-10-01 06:10:23.031871] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031876] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=4096, cccid=4 00:17:57.677 [2024-10-01 06:10:23.031881] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe4dc0) on tqpair(0xfabac0): expected_datao=0, payload_size=4096 00:17:57.677 [2024-10-01 06:10:23.031886] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031893] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031898] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031906] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.031913] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.031931] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031936] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.031948] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.031959] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.031969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.031973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.031982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.032002] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.677 [2024-10-01 06:10:23.032117] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.677 [2024-10-01 06:10:23.032124] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.677 [2024-10-01 06:10:23.032129] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032133] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=4096, cccid=4 00:17:57.677 [2024-10-01 06:10:23.032137] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe4dc0) on tqpair(0xfabac0): expected_datao=0, payload_size=4096 00:17:57.677 [2024-10-01 06:10:23.032142] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032150] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032154] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032163] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.032170] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.032174] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032178] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.032191] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032201] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032213] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032220] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032247] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032252] nvme_ctrlr.c:3134:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:17:57.677 [2024-10-01 06:10:23.032257] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:17:57.677 [2024-10-01 06:10:23.032262] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:17:57.677 [2024-10-01 06:10:23.032293] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032314] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.032324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.032336] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032341] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032345] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.032352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:17:57.677 [2024-10-01 06:10:23.032379] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.677 [2024-10-01 06:10:23.032387] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4f40, cid 5, qid 0 00:17:57.677 [2024-10-01 06:10:23.032493] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.032511] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.032516] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032521] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.032528] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.032534] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.032538] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032559] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4f40) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.032570] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032575] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.032583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.032604] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4f40, cid 5, qid 0 00:17:57.677 [2024-10-01 06:10:23.032691] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.032698] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.032702] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032706] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4f40) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.032717] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032722] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.032729] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.032747] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4f40, cid 5, qid 0 00:17:57.677 [2024-10-01 06:10:23.032858] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.032873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.032877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.032882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4f40) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.032893] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.036978] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.036992] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.037022] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4f40, cid 5, qid 0 00:17:57.677 [2024-10-01 06:10:23.037107] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.677 [2024-10-01 06:10:23.037115] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.677 [2024-10-01 06:10:23.037119] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.037124] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4f40) on tqpair=0xfabac0 00:17:57.677 [2024-10-01 06:10:23.037146] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.037152] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xfabac0) 00:17:57.677 [2024-10-01 06:10:23.037160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.677 [2024-10-01 06:10:23.037168] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.677 [2024-10-01 06:10:23.037173] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xfabac0) 00:17:57.678 [2024-10-01 06:10:23.037180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.678 [2024-10-01 06:10:23.037187] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037191] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xfabac0) 00:17:57.678 [2024-10-01 06:10:23.037198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.678 [2024-10-01 06:10:23.037206] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037211] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfabac0) 00:17:57.678 [2024-10-01 06:10:23.037217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.678 [2024-10-01 06:10:23.037239] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4f40, cid 5, qid 0 00:17:57.678 [2024-10-01 06:10:23.037263] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4dc0, cid 4, qid 0 00:17:57.678 [2024-10-01 06:10:23.037268] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe50c0, cid 6, qid 0 00:17:57.678 [2024-10-01 
06:10:23.037273] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe5240, cid 7, qid 0 00:17:57.678 [2024-10-01 06:10:23.037484] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.678 [2024-10-01 06:10:23.037499] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.678 [2024-10-01 06:10:23.037504] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037508] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=8192, cccid=5 00:17:57.678 [2024-10-01 06:10:23.037513] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe4f40) on tqpair(0xfabac0): expected_datao=0, payload_size=8192 00:17:57.678 [2024-10-01 06:10:23.037518] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037535] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037540] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037546] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.678 [2024-10-01 06:10:23.037552] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.678 [2024-10-01 06:10:23.037555] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037559] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=512, cccid=4 00:17:57.678 [2024-10-01 06:10:23.037564] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe4dc0) on tqpair(0xfabac0): expected_datao=0, payload_size=512 00:17:57.678 [2024-10-01 06:10:23.037568] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037575] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037578] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037584] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.678 [2024-10-01 06:10:23.037590] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.678 [2024-10-01 06:10:23.037593] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037597] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=512, cccid=6 00:17:57.678 [2024-10-01 06:10:23.037601] nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe50c0) on tqpair(0xfabac0): expected_datao=0, payload_size=512 00:17:57.678 [2024-10-01 06:10:23.037606] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037612] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037616] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037621] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:17:57.678 [2024-10-01 06:10:23.037627] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:17:57.678 [2024-10-01 06:10:23.037630] nvme_tcp.c:1730:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037634] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xfabac0): datao=0, datal=4096, cccid=7 00:17:57.678 [2024-10-01 06:10:23.037639] 
nvme_tcp.c:1742:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfe5240) on tqpair(0xfabac0): expected_datao=0, payload_size=4096 00:17:57.678 [2024-10-01 06:10:23.037643] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037650] nvme_tcp.c:1532:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037654] nvme_tcp.c:1323:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037661] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.678 [2024-10-01 06:10:23.037667] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.678 [2024-10-01 06:10:23.037671] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037675] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4f40) on tqpair=0xfabac0 00:17:57.678 [2024-10-01 06:10:23.037690] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.678 [2024-10-01 06:10:23.037697] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.678 [2024-10-01 06:10:23.037701] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037705] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4dc0) on tqpair=0xfabac0 00:17:57.678 [2024-10-01 06:10:23.037716] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.678 [2024-10-01 06:10:23.037722] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.678 [2024-10-01 06:10:23.037726] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.678 [2024-10-01 06:10:23.037730] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe50c0) on tqpair=0xfabac0 00:17:57.678 [2024-10-01 06:10:23.037737] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.678 [2024-10-01 06:10:23.037743] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.678 ===================================================== 00:17:57.678 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:17:57.678 ===================================================== 00:17:57.678 Controller Capabilities/Features 00:17:57.678 ================================ 00:17:57.678 Vendor ID: 8086 00:17:57.678 Subsystem Vendor ID: 8086 00:17:57.678 Serial Number: SPDK00000000000001 00:17:57.678 Model Number: SPDK bdev Controller 00:17:57.678 Firmware Version: 25.01 00:17:57.678 Recommended Arb Burst: 6 00:17:57.678 IEEE OUI Identifier: e4 d2 5c 00:17:57.678 Multi-path I/O 00:17:57.678 May have multiple subsystem ports: Yes 00:17:57.678 May have multiple controllers: Yes 00:17:57.678 Associated with SR-IOV VF: No 00:17:57.678 Max Data Transfer Size: 131072 00:17:57.678 Max Number of Namespaces: 32 00:17:57.678 Max Number of I/O Queues: 127 00:17:57.678 NVMe Specification Version (VS): 1.3 00:17:57.678 NVMe Specification Version (Identify): 1.3 00:17:57.678 Maximum Queue Entries: 128 00:17:57.678 Contiguous Queues Required: Yes 00:17:57.678 Arbitration Mechanisms Supported 00:17:57.678 Weighted Round Robin: Not Supported 00:17:57.678 Vendor Specific: Not Supported 00:17:57.678 Reset Timeout: 15000 ms 00:17:57.678 Doorbell Stride: 4 bytes 00:17:57.678 NVM Subsystem Reset: Not Supported 00:17:57.678 Command Sets Supported 00:17:57.678 NVM Command Set: Supported 00:17:57.678 Boot Partition: Not Supported 00:17:57.678 Memory Page Size Minimum: 4096 bytes 
00:17:57.678 Memory Page Size Maximum: 4096 bytes 00:17:57.678 Persistent Memory Region: Not Supported 00:17:57.678 Optional Asynchronous Events Supported 00:17:57.678 Namespace Attribute Notices: Supported 00:17:57.678 Firmware Activation Notices: Not Supported 00:17:57.678 ANA Change Notices: Not Supported 00:17:57.678 PLE Aggregate Log Change Notices: Not Supported 00:17:57.678 LBA Status Info Alert Notices: Not Supported 00:17:57.678 EGE Aggregate Log Change Notices: Not Supported 00:17:57.678 Normal NVM Subsystem Shutdown event: Not Supported 00:17:57.678 Zone Descriptor Change Notices: Not Supported 00:17:57.678 Discovery Log Change Notices: Not Supported 00:17:57.678 Controller Attributes 00:17:57.678 128-bit Host Identifier: Supported 00:17:57.678 Non-Operational Permissive Mode: Not Supported 00:17:57.678 NVM Sets: Not Supported 00:17:57.678 Read Recovery Levels: Not Supported 00:17:57.678 Endurance Groups: Not Supported 00:17:57.678 Predictable Latency Mode: Not Supported 00:17:57.678 Traffic Based Keep ALive: Not Supported 00:17:57.678 Namespace Granularity: Not Supported 00:17:57.678 SQ Associations: Not Supported 00:17:57.678 UUID List: Not Supported 00:17:57.678 Multi-Domain Subsystem: Not Supported 00:17:57.678 Fixed Capacity Management: Not Supported 00:17:57.678 Variable Capacity Management: Not Supported 00:17:57.678 Delete Endurance Group: Not Supported 00:17:57.678 Delete NVM Set: Not Supported 00:17:57.678 Extended LBA Formats Supported: Not Supported 00:17:57.678 Flexible Data Placement Supported: Not Supported 00:17:57.678 00:17:57.678 Controller Memory Buffer Support 00:17:57.678 ================================ 00:17:57.678 Supported: No 00:17:57.678 00:17:57.678 Persistent Memory Region Support 00:17:57.678 ================================ 00:17:57.678 Supported: No 00:17:57.678 00:17:57.678 Admin Command Set Attributes 00:17:57.678 ============================ 00:17:57.678 Security Send/Receive: Not Supported 00:17:57.678 Format NVM: Not Supported 00:17:57.678 Firmware Activate/Download: Not Supported 00:17:57.678 Namespace Management: Not Supported 00:17:57.678 Device Self-Test: Not Supported 00:17:57.678 Directives: Not Supported 00:17:57.678 NVMe-MI: Not Supported 00:17:57.678 Virtualization Management: Not Supported 00:17:57.678 Doorbell Buffer Config: Not Supported 00:17:57.678 Get LBA Status Capability: Not Supported 00:17:57.678 Command & Feature Lockdown Capability: Not Supported 00:17:57.678 Abort Command Limit: 4 00:17:57.678 Async Event Request Limit: 4 00:17:57.678 Number of Firmware Slots: N/A 00:17:57.678 Firmware Slot 1 Read-Only: N/A 00:17:57.678 Firmware Activation Without Reset: N/A 00:17:57.678 Multiple Update Detection Support: N/A 00:17:57.679 Firmware Update Granularity: No Information Provided 00:17:57.679 Per-Namespace SMART Log: No 00:17:57.679 Asymmetric Namespace Access Log Page: Not Supported 00:17:57.679 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:17:57.679 Command Effects Log Page: Supported 00:17:57.679 Get Log Page Extended Data: Supported 00:17:57.679 Telemetry Log Pages: Not Supported 00:17:57.679 Persistent Event Log Pages: Not Supported 00:17:57.679 Supported Log Pages Log Page: May Support 00:17:57.679 Commands Supported & Effects Log Page: Not Supported 00:17:57.679 Feature Identifiers & Effects Log Page:May Support 00:17:57.679 NVMe-MI Commands & Effects Log Page: May Support 00:17:57.679 Data Area 4 for Telemetry Log: Not Supported 00:17:57.679 Error Log Page Entries Supported: 128 00:17:57.679 Keep Alive: Supported 
00:17:57.679 Keep Alive Granularity: 10000 ms 00:17:57.679 00:17:57.679 NVM Command Set Attributes 00:17:57.679 ========================== 00:17:57.679 Submission Queue Entry Size 00:17:57.679 Max: 64 00:17:57.679 Min: 64 00:17:57.679 Completion Queue Entry Size 00:17:57.679 Max: 16 00:17:57.679 Min: 16 00:17:57.679 Number of Namespaces: 32 00:17:57.679 Compare Command: Supported 00:17:57.679 Write Uncorrectable Command: Not Supported 00:17:57.679 Dataset Management Command: Supported 00:17:57.679 Write Zeroes Command: Supported 00:17:57.679 Set Features Save Field: Not Supported 00:17:57.679 Reservations: Supported 00:17:57.679 Timestamp: Not Supported 00:17:57.679 Copy: Supported 00:17:57.679 Volatile Write Cache: Present 00:17:57.679 Atomic Write Unit (Normal): 1 00:17:57.679 Atomic Write Unit (PFail): 1 00:17:57.679 Atomic Compare & Write Unit: 1 00:17:57.679 Fused Compare & Write: Supported 00:17:57.679 Scatter-Gather List 00:17:57.679 SGL Command Set: Supported 00:17:57.679 SGL Keyed: Supported 00:17:57.679 SGL Bit Bucket Descriptor: Not Supported 00:17:57.679 SGL Metadata Pointer: Not Supported 00:17:57.679 Oversized SGL: Not Supported 00:17:57.679 SGL Metadata Address: Not Supported 00:17:57.679 SGL Offset: Supported 00:17:57.679 Transport SGL Data Block: Not Supported 00:17:57.679 Replay Protected Memory Block: Not Supported 00:17:57.679 00:17:57.679 Firmware Slot Information 00:17:57.679 ========================= 00:17:57.679 Active slot: 1 00:17:57.679 Slot 1 Firmware Revision: 25.01 00:17:57.679 00:17:57.679 00:17:57.679 Commands Supported and Effects 00:17:57.679 ============================== 00:17:57.679 Admin Commands 00:17:57.679 -------------- 00:17:57.679 Get Log Page (02h): Supported 00:17:57.679 Identify (06h): Supported 00:17:57.679 Abort (08h): Supported 00:17:57.679 Set Features (09h): Supported 00:17:57.679 Get Features (0Ah): Supported 00:17:57.679 Asynchronous Event Request (0Ch): Supported 00:17:57.679 Keep Alive (18h): Supported 00:17:57.679 I/O Commands 00:17:57.679 ------------ 00:17:57.679 Flush (00h): Supported LBA-Change 00:17:57.679 Write (01h): Supported LBA-Change 00:17:57.679 Read (02h): Supported 00:17:57.679 Compare (05h): Supported 00:17:57.679 Write Zeroes (08h): Supported LBA-Change 00:17:57.679 Dataset Management (09h): Supported LBA-Change 00:17:57.679 Copy (19h): Supported LBA-Change 00:17:57.679 00:17:57.679 Error Log 00:17:57.679 ========= 00:17:57.679 00:17:57.679 Arbitration 00:17:57.679 =========== 00:17:57.679 Arbitration Burst: 1 00:17:57.679 00:17:57.679 Power Management 00:17:57.679 ================ 00:17:57.679 Number of Power States: 1 00:17:57.679 Current Power State: Power State #0 00:17:57.679 Power State #0: 00:17:57.679 Max Power: 0.00 W 00:17:57.679 Non-Operational State: Operational 00:17:57.679 Entry Latency: Not Reported 00:17:57.679 Exit Latency: Not Reported 00:17:57.679 Relative Read Throughput: 0 00:17:57.679 Relative Read Latency: 0 00:17:57.679 Relative Write Throughput: 0 00:17:57.679 Relative Write Latency: 0 00:17:57.679 Idle Power: Not Reported 00:17:57.679 Active Power: Not Reported 00:17:57.679 Non-Operational Permissive Mode: Not Supported 00:17:57.679 00:17:57.679 Health Information 00:17:57.679 ================== 00:17:57.679 Critical Warnings: 00:17:57.679 Available Spare Space: OK 00:17:57.679 Temperature: OK 00:17:57.679 Device Reliability: OK 00:17:57.679 Read Only: No 00:17:57.679 Volatile Memory Backup: OK 00:17:57.679 Current Temperature: 0 Kelvin (-273 Celsius) 00:17:57.679 Temperature Threshold: 0 
Kelvin (-273 Celsius) 00:17:57.679 Available Spare: 0% 00:17:57.679 Available Spare Threshold: 0% 00:17:57.679 Life Percentage Used:[2024-10-01 06:10:23.037747] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.037751] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe5240) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.037850] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.037857] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xfabac0) 00:17:57.679 [2024-10-01 06:10:23.037865] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.679 [2024-10-01 06:10:23.037888] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe5240, cid 7, qid 0 00:17:57.679 [2024-10-01 06:10:23.038012] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.679 [2024-10-01 06:10:23.038021] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.679 [2024-10-01 06:10:23.038025] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038042] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe5240) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038086] nvme_ctrlr.c:4386:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:17:57.679 [2024-10-01 06:10:23.038099] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe47c0) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.679 [2024-10-01 06:10:23.038112] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4940) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.679 [2024-10-01 06:10:23.038123] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4ac0) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.679 [2024-10-01 06:10:23.038133] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:57.679 [2024-10-01 06:10:23.038149] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038154] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038158] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.679 [2024-10-01 06:10:23.038166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.679 [2024-10-01 06:10:23.038194] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.679 [2024-10-01 06:10:23.038286] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.679 [2024-10-01 
06:10:23.038294] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.679 [2024-10-01 06:10:23.038298] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038302] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.679 [2024-10-01 06:10:23.038310] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038315] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.679 [2024-10-01 06:10:23.038319] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.679 [2024-10-01 06:10:23.038326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.679 [2024-10-01 06:10:23.038363] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.038455] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.038462] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.038466] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038470] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.038475] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:17:57.680 [2024-10-01 06:10:23.038479] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:17:57.680 [2024-10-01 06:10:23.038489] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038494] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038498] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.038505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.038522] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.038596] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.038603] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.038607] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038611] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.038621] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038626] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038630] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.038637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.038654] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.038731] 
nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.038738] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.038742] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038746] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.038756] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038761] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038765] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.038772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.038789] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.038860] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.038873] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.038877] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038882] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.038892] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038924] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.038929] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.038952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.038973] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039059] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039067] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039071] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039075] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039086] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039091] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039095] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039103] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039121] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039199] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039207] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039211] 
nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039215] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039226] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039231] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039235] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039275] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039358] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039365] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039369] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039373] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039383] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039388] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039392] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039416] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039491] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039498] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039501] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039506] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039516] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039520] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039524] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039531] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039548] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039613] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039620] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039624] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039628] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 
[2024-10-01 06:10:23.039638] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039643] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039647] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039671] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039771] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039786] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039791] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039795] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039806] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039811] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039815] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.039842] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.039936] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.039945] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.039949] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039953] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.039964] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039969] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.039973] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.680 [2024-10-01 06:10:23.039981] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.680 [2024-10-01 06:10:23.040001] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.680 [2024-10-01 06:10:23.040093] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.680 [2024-10-01 06:10:23.040100] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.680 [2024-10-01 06:10:23.040104] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.040108] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.680 [2024-10-01 06:10:23.040121] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.680 [2024-10-01 06:10:23.040125] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.680 [2024-10-01 
06:10:23.040129] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040154] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.040225] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.040232] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.040236] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040240] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.040251] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040255] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040260] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040285] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.040356] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.040363] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.040367] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040371] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.040382] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040387] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040406] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040430] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.040521] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.040537] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.040542] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040547] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.040558] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040563] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040567] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040593] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.040663] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.040670] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.040675] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040679] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.040689] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040694] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040698] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040738] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.040814] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.040821] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.040825] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040829] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.040840] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040845] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.040849] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.040857] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.040875] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 06:10:23.044934] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.044955] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.044960] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.044965] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.044978] nvme_tcp.c: 800:nvme_tcp_build_contig_request: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.044983] nvme_tcp.c: 977:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.044987] nvme_tcp.c: 986:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xfabac0) 00:17:57.681 [2024-10-01 06:10:23.044995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:57.681 [2024-10-01 06:10:23.045020] nvme_tcp.c: 951:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfe4c40, cid 3, qid 0 00:17:57.681 [2024-10-01 
06:10:23.045116] nvme_tcp.c:1198:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:17:57.681 [2024-10-01 06:10:23.045123] nvme_tcp.c:1986:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:17:57.681 [2024-10-01 06:10:23.045127] nvme_tcp.c:1659:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:17:57.681 [2024-10-01 06:10:23.045132] nvme_tcp.c:1079:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfe4c40) on tqpair=0xfabac0 00:17:57.681 [2024-10-01 06:10:23.045140] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:17:57.681 0% 00:17:57.681 Data Units Read: 0 00:17:57.681 Data Units Written: 0 00:17:57.681 Host Read Commands: 0 00:17:57.681 Host Write Commands: 0 00:17:57.681 Controller Busy Time: 0 minutes 00:17:57.681 Power Cycles: 0 00:17:57.681 Power On Hours: 0 hours 00:17:57.681 Unsafe Shutdowns: 0 00:17:57.681 Unrecoverable Media Errors: 0 00:17:57.681 Lifetime Error Log Entries: 0 00:17:57.681 Warning Temperature Time: 0 minutes 00:17:57.681 Critical Temperature Time: 0 minutes 00:17:57.681 00:17:57.681 Number of Queues 00:17:57.681 ================ 00:17:57.681 Number of I/O Submission Queues: 127 00:17:57.681 Number of I/O Completion Queues: 127 00:17:57.681 00:17:57.681 Active Namespaces 00:17:57.681 ================= 00:17:57.681 Namespace ID:1 00:17:57.681 Error Recovery Timeout: Unlimited 00:17:57.681 Command Set Identifier: NVM (00h) 00:17:57.681 Deallocate: Supported 00:17:57.681 Deallocated/Unwritten Error: Not Supported 00:17:57.681 Deallocated Read Value: Unknown 00:17:57.681 Deallocate in Write Zeroes: Not Supported 00:17:57.681 Deallocated Guard Field: 0xFFFF 00:17:57.681 Flush: Supported 00:17:57.681 Reservation: Supported 00:17:57.681 Namespace Sharing Capabilities: Multiple Controllers 00:17:57.681 Size (in LBAs): 131072 (0GiB) 00:17:57.681 Capacity (in LBAs): 131072 (0GiB) 00:17:57.681 Utilization (in LBAs): 131072 (0GiB) 00:17:57.681 NGUID: ABCDEF0123456789ABCDEF0123456789 00:17:57.681 EUI64: ABCDEF0123456789 00:17:57.681 UUID: 49079c09-2a13-4da1-83bf-ce60ebb856d8 00:17:57.681 Thin Provisioning: Not Supported 00:17:57.681 Per-NS Atomic Units: Yes 00:17:57.681 Atomic Boundary Size (Normal): 0 00:17:57.681 Atomic Boundary Size (PFail): 0 00:17:57.681 Atomic Boundary Offset: 0 00:17:57.681 Maximum Single Source Range Length: 65535 00:17:57.681 Maximum Copy Length: 65535 00:17:57.681 Maximum Source Range Count: 1 00:17:57.681 NGUID/EUI64 Never Reused: No 00:17:57.681 Namespace Write Protected: No 00:17:57.681 Number of LBA Formats: 1 00:17:57.681 Current LBA Format: LBA Format #00 00:17:57.681 LBA Format #00: Data Size: 512 Metadata Size: 0 00:17:57.681 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@512 -- # nvmfcleanup 
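The dump above is the Identify Controller / Identify Namespace data for nqn.2016-06.io.spdk:cnode1, gathered by host/identify.sh over NVMe/TCP just before the subsystem is deleted and the target torn down. As a rough, hypothetical equivalent only — this is not what the script runs, and it assumes the target is still listening on 10.0.0.3:4420 and that nvme-cli is installed on the host — the same data could be read back by hand with:
# hypothetical manual check, not part of the autotest flow
modprobe nvme-tcp                                         # kernel initiator transport (the target-side module load is visible later in this log)
nvme discover -t tcp -a 10.0.0.3 -s 4420                  # discovery log from the SPDK target
nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme id-ctrl /dev/nvme0                                   # Identify Controller, same fields as printed above
nvme id-ns /dev/nvme0n1                                   # Identify Namespace for NSID 1
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
With the kernel initiator connected, the controller appears as a regular /dev/nvmeX device, which makes it easy to cross-check values reported above such as Keep Alive Granularity or SGL support.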
00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:57.681 rmmod nvme_tcp 00:17:57.681 rmmod nvme_fabrics 00:17:57.681 rmmod nvme_keyring 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@513 -- # '[' -n 87823 ']' 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@514 -- # killprocess 87823 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 87823 ']' 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 87823 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:57.681 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87823 00:17:57.682 killing process with pid 87823 00:17:57.682 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:57.682 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:57.682 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87823' 00:17:57.682 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 87823 00:17:57.682 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 87823 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-save 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@787 -- # iptables-restore 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:17:57.941 06:10:23 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:17:57.941 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:17:57.942 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:17:57.942 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:17:58.200 00:17:58.200 real 0m2.090s 00:17:58.200 user 0m4.040s 00:17:58.200 sys 0m0.687s 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:17:58.200 ************************************ 00:17:58.200 END TEST nvmf_identify 00:17:58.200 ************************************ 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:17:58.200 ************************************ 00:17:58.200 START TEST nvmf_perf 00:17:58.200 ************************************ 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:17:58.200 * Looking for test storage... 
00:17:58.200 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lcov --version 00:17:58.200 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:58.460 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.461 --rc genhtml_branch_coverage=1 00:17:58.461 --rc genhtml_function_coverage=1 00:17:58.461 --rc genhtml_legend=1 00:17:58.461 --rc geninfo_all_blocks=1 00:17:58.461 --rc geninfo_unexecuted_blocks=1 00:17:58.461 00:17:58.461 ' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.461 --rc genhtml_branch_coverage=1 00:17:58.461 --rc genhtml_function_coverage=1 00:17:58.461 --rc genhtml_legend=1 00:17:58.461 --rc geninfo_all_blocks=1 00:17:58.461 --rc geninfo_unexecuted_blocks=1 00:17:58.461 00:17:58.461 ' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.461 --rc genhtml_branch_coverage=1 00:17:58.461 --rc genhtml_function_coverage=1 00:17:58.461 --rc genhtml_legend=1 00:17:58.461 --rc geninfo_all_blocks=1 00:17:58.461 --rc geninfo_unexecuted_blocks=1 00:17:58.461 00:17:58.461 ' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:58.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:58.461 --rc genhtml_branch_coverage=1 00:17:58.461 --rc genhtml_function_coverage=1 00:17:58.461 --rc genhtml_legend=1 00:17:58.461 --rc geninfo_all_blocks=1 00:17:58.461 --rc geninfo_unexecuted_blocks=1 00:17:58.461 00:17:58.461 ' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:58.461 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@472 -- # prepare_net_devs 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@434 -- # local -g is_hw=no 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@436 -- # remove_spdk_ns 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@456 -- # nvmf_veth_init 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:17:58.461 Cannot find device "nvmf_init_br" 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:17:58.461 Cannot find device "nvmf_init_br2" 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:17:58.461 Cannot find device "nvmf_tgt_br" 00:17:58.461 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:17:58.462 Cannot find device "nvmf_tgt_br2" 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:17:58.462 Cannot find device "nvmf_init_br" 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:17:58.462 Cannot find device "nvmf_init_br2" 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:17:58.462 Cannot find device "nvmf_tgt_br" 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:17:58.462 Cannot find device "nvmf_tgt_br2" 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:17:58.462 06:10:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:17:58.462 Cannot find device "nvmf_br" 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:17:58.462 Cannot find device "nvmf_init_if" 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:17:58.462 Cannot find device "nvmf_init_if2" 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:58.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:58.462 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:17:58.462 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:17:58.721 06:10:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:58.721 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:17:58.722 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:58.722 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:17:58.722 00:17:58.722 --- 10.0.0.3 ping statistics --- 00:17:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.722 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:17:58.722 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:17:58.722 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:17:58.722 00:17:58.722 --- 10.0.0.4 ping statistics --- 00:17:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.722 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:58.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:58.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:17:58.722 00:17:58.722 --- 10.0.0.1 ping statistics --- 00:17:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.722 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:17:58.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:17:58.722 00:17:58.722 --- 10.0.0.2 ping statistics --- 00:17:58.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.722 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@457 -- # return 0 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@505 -- # nvmfpid=88071 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@506 -- # waitforlisten 88071 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 88071 ']' 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
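
The nvmf_veth_init and nvmfappstart steps traced above reduce to a small fixture: one veth pair per initiator/target interface, the target ends moved into the nvmf_tgt_ns_spdk namespace with the 10.0.0.3/10.0.0.4 addresses, everything joined over the nvmf_br bridge, and nvmf_tgt launched inside the namespace before any rpc.py call is issued. A condensed sketch follows, assuming the binary and socket paths shown in the trace; the polling loop is a minimal stand-in for the real waitforlisten helper, not its actual implementation.

  # Build one initiator/target veth pair and bridge it (the second pair is analogous).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br

  # Launch the target inside the namespace and wait for its RPC socket
  # (values taken from the trace above; loop is illustrative only).
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
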
00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.722 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:58.983 [2024-10-01 06:10:24.355534] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:17:58.983 [2024-10-01 06:10:24.355610] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.983 [2024-10-01 06:10:24.489816] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.983 [2024-10-01 06:10:24.523511] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.983 [2024-10-01 06:10:24.523801] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.983 [2024-10-01 06:10:24.523954] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.983 [2024-10-01 06:10:24.524102] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.983 [2024-10-01 06:10:24.524149] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.983 [2024-10-01 06:10:24.524378] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.983 [2024-10-01 06:10:24.524419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.983 [2024-10-01 06:10:24.524546] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.983 [2024-10-01 06:10:24.524550] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.983 [2024-10-01 06:10:24.553749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:59.243 06:10:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:17:59.502 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:17:59.502 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:17:59.762 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:17:59.762 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:00.330 [2024-10-01 06:10:25.892650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:00.330 06:10:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:00.898 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.898 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:00.898 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:18:00.898 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:18:01.157 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:01.415 [2024-10-01 06:10:26.913938] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:01.415 06:10:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:01.674 06:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:01.674 06:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:01.674 06:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:18:01.674 06:10:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:18:03.051 Initializing NVMe Controllers 00:18:03.051 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:03.051 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:03.051 Initialization complete. Launching workers. 00:18:03.051 ======================================================== 00:18:03.051 Latency(us) 00:18:03.051 Device Information : IOPS MiB/s Average min max 00:18:03.051 PCIE (0000:00:10.0) NSID 1 from core 0: 24032.00 93.88 1330.79 350.20 8129.31 00:18:03.051 ======================================================== 00:18:03.051 Total : 24032.00 93.88 1330.79 350.20 8129.31 00:18:03.051 00:18:03.051 06:10:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:03.998 Initializing NVMe Controllers 00:18:03.998 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:03.998 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:03.998 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:03.998 Initialization complete. Launching workers. 
00:18:03.998 ======================================================== 00:18:03.998 Latency(us) 00:18:03.998 Device Information : IOPS MiB/s Average min max 00:18:03.998 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3916.94 15.30 254.97 95.00 4259.65 00:18:03.998 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 125.00 0.49 8055.67 4962.35 11998.13 00:18:03.998 ======================================================== 00:18:03.998 Total : 4041.94 15.79 496.20 95.00 11998.13 00:18:03.998 00:18:04.257 06:10:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:05.665 Initializing NVMe Controllers 00:18:05.665 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:05.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:05.665 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:05.665 Initialization complete. Launching workers. 00:18:05.665 ======================================================== 00:18:05.665 Latency(us) 00:18:05.665 Device Information : IOPS MiB/s Average min max 00:18:05.665 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8604.95 33.61 3719.13 509.38 9393.42 00:18:05.665 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3952.36 15.44 8096.97 6646.33 17241.99 00:18:05.665 ======================================================== 00:18:05.666 Total : 12557.31 49.05 5097.04 509.38 17241.99 00:18:05.666 00:18:05.666 06:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:18:05.666 06:10:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:08.212 Initializing NVMe Controllers 00:18:08.212 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.212 Controller IO queue size 128, less than required. 00:18:08.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.212 Controller IO queue size 128, less than required. 00:18:08.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.212 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:08.212 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:08.212 Initialization complete. Launching workers. 
00:18:08.212 ======================================================== 00:18:08.212 Latency(us) 00:18:08.212 Device Information : IOPS MiB/s Average min max 00:18:08.212 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1905.15 476.29 68100.20 38463.23 99196.30 00:18:08.212 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 661.80 165.45 202646.12 76628.61 327163.59 00:18:08.212 ======================================================== 00:18:08.212 Total : 2566.95 641.74 102788.02 38463.23 327163.59 00:18:08.212 00:18:08.212 06:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:18:08.212 Initializing NVMe Controllers 00:18:08.212 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:08.212 Controller IO queue size 128, less than required. 00:18:08.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.212 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:18:08.212 Controller IO queue size 128, less than required. 00:18:08.212 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:08.212 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:18:08.212 WARNING: Some requested NVMe devices were skipped 00:18:08.212 No valid NVMe controllers or AIO or URING devices found 00:18:08.212 06:10:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:18:10.747 Initializing NVMe Controllers 00:18:10.747 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:10.747 Controller IO queue size 128, less than required. 00:18:10.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.747 Controller IO queue size 128, less than required. 00:18:10.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:10.747 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:10.747 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:18:10.747 Initialization complete. Launching workers. 
00:18:10.747 00:18:10.747 ==================== 00:18:10.747 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:18:10.747 TCP transport: 00:18:10.747 polls: 9357 00:18:10.747 idle_polls: 5515 00:18:10.747 sock_completions: 3842 00:18:10.747 nvme_completions: 6553 00:18:10.747 submitted_requests: 9960 00:18:10.747 queued_requests: 1 00:18:10.747 00:18:10.747 ==================== 00:18:10.747 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:18:10.747 TCP transport: 00:18:10.747 polls: 10998 00:18:10.747 idle_polls: 7234 00:18:10.747 sock_completions: 3764 00:18:10.747 nvme_completions: 6569 00:18:10.747 submitted_requests: 9942 00:18:10.747 queued_requests: 1 00:18:10.747 ======================================================== 00:18:10.747 Latency(us) 00:18:10.747 Device Information : IOPS MiB/s Average min max 00:18:10.747 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1633.53 408.38 80125.97 37268.91 127161.64 00:18:10.747 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1637.51 409.38 78644.50 31511.80 144208.23 00:18:10.747 ======================================================== 00:18:10.747 Total : 3271.04 817.76 79384.33 31511.80 144208.23 00:18:10.747 00:18:10.747 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:18:10.747 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=66270355-3802-4c0d-8d1f-6484ed5004d3 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 66270355-3802-4c0d-8d1f-6484ed5004d3 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=66270355-3802-4c0d-8d1f-6484ed5004d3 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:11.314 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:11.315 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:11.315 06:10:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:11.882 { 00:18:11.882 "uuid": "66270355-3802-4c0d-8d1f-6484ed5004d3", 00:18:11.882 "name": "lvs_0", 00:18:11.882 "base_bdev": "Nvme0n1", 00:18:11.882 "total_data_clusters": 1278, 00:18:11.882 "free_clusters": 1278, 00:18:11.882 "block_size": 4096, 00:18:11.882 "cluster_size": 4194304 00:18:11.882 } 00:18:11.882 ]' 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="66270355-3802-4c0d-8d1f-6484ed5004d3") .free_clusters' 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1278 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | 
select(.uuid=="66270355-3802-4c0d-8d1f-6484ed5004d3") .cluster_size' 00:18:11.882 5112 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5112 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5112 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:18:11.882 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 66270355-3802-4c0d-8d1f-6484ed5004d3 lbd_0 5112 00:18:12.140 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=104819a2-7e00-4926-b7cf-7b2e4cec0d03 00:18:12.140 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 104819a2-7e00-4926-b7cf-7b2e4cec0d03 lvs_n_0 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=cf4caf2e-5fa4-4d3e-9868-024a0b663e17 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb cf4caf2e-5fa4-4d3e-9868-024a0b663e17 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=cf4caf2e-5fa4-4d3e-9868-024a0b663e17 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:18:12.399 06:10:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:12.659 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:12.659 { 00:18:12.659 "uuid": "66270355-3802-4c0d-8d1f-6484ed5004d3", 00:18:12.659 "name": "lvs_0", 00:18:12.659 "base_bdev": "Nvme0n1", 00:18:12.659 "total_data_clusters": 1278, 00:18:12.659 "free_clusters": 0, 00:18:12.659 "block_size": 4096, 00:18:12.659 "cluster_size": 4194304 00:18:12.659 }, 00:18:12.659 { 00:18:12.659 "uuid": "cf4caf2e-5fa4-4d3e-9868-024a0b663e17", 00:18:12.659 "name": "lvs_n_0", 00:18:12.659 "base_bdev": "104819a2-7e00-4926-b7cf-7b2e4cec0d03", 00:18:12.659 "total_data_clusters": 1276, 00:18:12.659 "free_clusters": 1276, 00:18:12.659 "block_size": 4096, 00:18:12.659 "cluster_size": 4194304 00:18:12.659 } 00:18:12.659 ]' 00:18:12.659 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="cf4caf2e-5fa4-4d3e-9868-024a0b663e17") .free_clusters' 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=1276 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="cf4caf2e-5fa4-4d3e-9868-024a0b663e17") .cluster_size' 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:18:12.917 5104 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=5104 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 5104 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:18:12.917 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cf4caf2e-5fa4-4d3e-9868-024a0b663e17 lbd_nest_0 5104 00:18:13.175 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=55e72a1a-41db-4096-bc92-86ec71866461 00:18:13.175 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.434 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:18:13.434 06:10:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 55e72a1a-41db-4096-bc92-86ec71866461 00:18:13.692 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:13.950 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:18:13.950 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:18:13.950 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:13.950 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:13.950 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:14.208 Initializing NVMe Controllers 00:18:14.208 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:14.208 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:14.208 WARNING: Some requested NVMe devices were skipped 00:18:14.208 No valid NVMe controllers or AIO or URING devices found 00:18:14.208 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:14.208 06:10:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:26.413 Initializing NVMe Controllers 00:18:26.413 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.413 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:26.413 Initialization complete. Launching workers. 
00:18:26.413 ======================================================== 00:18:26.413 Latency(us) 00:18:26.414 Device Information : IOPS MiB/s Average min max 00:18:26.414 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 974.20 121.77 1025.68 325.12 8486.24 00:18:26.414 ======================================================== 00:18:26.414 Total : 974.20 121.77 1025.68 325.12 8486.24 00:18:26.414 00:18:26.414 06:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:26.414 06:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:26.414 06:10:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:26.414 Initializing NVMe Controllers 00:18:26.414 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:26.414 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:26.414 WARNING: Some requested NVMe devices were skipped 00:18:26.414 No valid NVMe controllers or AIO or URING devices found 00:18:26.414 06:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:26.414 06:10:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.391 Initializing NVMe Controllers 00:18:36.391 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.391 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:36.391 Initialization complete. Launching workers. 
00:18:36.391 ======================================================== 00:18:36.391 Latency(us) 00:18:36.391 Device Information : IOPS MiB/s Average min max 00:18:36.391 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1325.31 165.66 24168.47 5326.45 67638.34 00:18:36.391 ======================================================== 00:18:36.391 Total : 1325.31 165.66 24168.47 5326.45 67638.34 00:18:36.391 00:18:36.391 06:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:18:36.391 06:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.391 06:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:36.391 Initializing NVMe Controllers 00:18:36.391 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:36.391 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:18:36.391 WARNING: Some requested NVMe devices were skipped 00:18:36.391 No valid NVMe controllers or AIO or URING devices found 00:18:36.391 06:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:18:36.391 06:11:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:46.368 Initializing NVMe Controllers 00:18:46.368 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:18:46.368 Controller IO queue size 128, less than required. 00:18:46.368 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:46.368 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:46.368 Initialization complete. Launching workers. 
00:18:46.368 ======================================================== 00:18:46.368 Latency(us) 00:18:46.368 Device Information : IOPS MiB/s Average min max 00:18:46.368 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4130.23 516.28 31018.43 7598.58 67146.76 00:18:46.368 ======================================================== 00:18:46.368 Total : 4130.23 516.28 31018.43 7598.58 67146.76 00:18:46.368 00:18:46.368 06:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:46.368 06:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 55e72a1a-41db-4096-bc92-86ec71866461 00:18:46.368 06:11:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:18:46.627 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 104819a2-7e00-4926-b7cf-7b2e4cec0d03 00:18:46.886 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # nvmfcleanup 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:47.145 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:47.145 rmmod nvme_tcp 00:18:47.145 rmmod nvme_fabrics 00:18:47.145 rmmod nvme_keyring 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@513 -- # '[' -n 88071 ']' 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@514 -- # killprocess 88071 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 88071 ']' 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 88071 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88071 00:18:47.404 killing process with pid 88071 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88071' 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@969 -- # kill 88071 00:18:47.404 06:11:12 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 88071 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-save 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@787 -- # iptables-restore 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:18:48.780 00:18:48.780 real 0m50.673s 00:18:48.780 user 3m10.199s 00:18:48.780 sys 0m12.068s 00:18:48.780 ************************************ 00:18:48.780 END TEST nvmf_perf 00:18:48.780 ************************************ 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:48.780 06:11:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:18:49.040 06:11:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:49.040 06:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.041 ************************************ 00:18:49.041 START TEST nvmf_fio_host 00:18:49.041 ************************************ 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:18:49.041 * Looking for test storage... 00:18:49.041 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lcov --version 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:49.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.041 --rc genhtml_branch_coverage=1 00:18:49.041 --rc genhtml_function_coverage=1 00:18:49.041 --rc genhtml_legend=1 00:18:49.041 --rc geninfo_all_blocks=1 00:18:49.041 --rc geninfo_unexecuted_blocks=1 00:18:49.041 00:18:49.041 ' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:49.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.041 --rc genhtml_branch_coverage=1 00:18:49.041 --rc genhtml_function_coverage=1 00:18:49.041 --rc genhtml_legend=1 00:18:49.041 --rc geninfo_all_blocks=1 00:18:49.041 --rc geninfo_unexecuted_blocks=1 00:18:49.041 00:18:49.041 ' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:49.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.041 --rc genhtml_branch_coverage=1 00:18:49.041 --rc genhtml_function_coverage=1 00:18:49.041 --rc genhtml_legend=1 00:18:49.041 --rc geninfo_all_blocks=1 00:18:49.041 --rc geninfo_unexecuted_blocks=1 00:18:49.041 00:18:49.041 ' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:49.041 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.041 --rc genhtml_branch_coverage=1 00:18:49.041 --rc genhtml_function_coverage=1 00:18:49.041 --rc genhtml_legend=1 00:18:49.041 --rc geninfo_all_blocks=1 00:18:49.041 --rc geninfo_unexecuted_blocks=1 00:18:49.041 00:18:49.041 ' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:49.041 06:11:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:49.041 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.042 06:11:14 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:49.042 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 
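
The NVME_HOSTNQN, NVME_HOSTID, NVME_HOST and NVME_CONNECT values assembled above while sourcing common.sh are what host-side tests expand when attaching a kernel initiator to the target. A hedged sketch of that expansion, reusing the subsystem NQN and listener address from the perf run earlier in this log; whether fio.sh itself connects this way is not shown in this excerpt, and the host-ID derivation is an assumption.

  # Sketch only: expand NVME_CONNECT/NVME_HOST as defined when common.sh is sourced.
  NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:...
  NVME_HOSTID=${NVME_HOSTNQN##*:}           # assumption: host ID is the UUID suffix of the NQN
  nvme connect -t tcp -a 10.0.0.3 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
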
00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:49.042 Cannot find device "nvmf_init_br" 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:49.042 Cannot find device "nvmf_init_br2" 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:18:49.042 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:49.301 Cannot find device "nvmf_tgt_br" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:18:49.301 Cannot find device "nvmf_tgt_br2" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:49.301 Cannot find device "nvmf_init_br" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:49.301 Cannot find device "nvmf_init_br2" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:49.301 Cannot find device "nvmf_tgt_br" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:49.301 Cannot find device "nvmf_tgt_br2" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:49.301 Cannot find device "nvmf_br" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:49.301 Cannot find device "nvmf_init_if" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:49.301 Cannot find device "nvmf_init_if2" 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.301 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:49.301 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:49.561 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
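For orientation, a condensed sketch of the topology that the nvmf_veth_init trace above builds: two initiator-side veth pairs stay on the host, two target-side pairs are moved into the nvmf_tgt_ns_spdk namespace, the four host-side peers are bridged through nvmf_br, and TCP port 4420 is opened in iptables. Interface names, addresses, and flags are copied from the trace; the "|| true" guards and the pre-cleanup steps are omitted.

    # illustrative sketch, not log output
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br      # initiator-side pairs stay on the host
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br       # target-side pairs...
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk                 # ...are moved into the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator addresses
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target addresses
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" master nvmf_br                          # bridge the four host-side peers
    done
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

This layout is what lets the host-side fio job reach the listener at 10.0.0.3:4420 while the SPDK target runs isolated inside the namespace; the pings in the surrounding log are the sanity check for exactly that reachability.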
00:18:49.561 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:18:49.561 00:18:49.561 --- 10.0.0.3 ping statistics --- 00:18:49.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.561 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:49.561 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:49.561 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:18:49.561 00:18:49.561 --- 10.0.0.4 ping statistics --- 00:18:49.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.561 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.561 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.561 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:18:49.561 00:18:49.561 --- 10.0.0.1 ping statistics --- 00:18:49.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.561 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:49.561 06:11:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:49.561 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.561 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:49.561 00:18:49.561 --- 10.0.0.2 ping statistics --- 00:18:49.561 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.561 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@457 -- # return 0 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=88934 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 88934 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@831 -- # '[' -z 88934 ']' 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:49.561 06:11:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:49.561 [2024-10-01 06:11:15.093152] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:18:49.561 [2024-10-01 06:11:15.093979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.835 [2024-10-01 06:11:15.237322] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.835 [2024-10-01 06:11:15.279998] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.835 [2024-10-01 06:11:15.280227] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.835 [2024-10-01 06:11:15.280412] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.835 [2024-10-01 06:11:15.280569] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.835 [2024-10-01 06:11:15.280611] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
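Condensing the host/fio.sh steps traced here and just below: the target is launched inside the namespace, the test waits for its RPC socket, and the subsystem that the fio job will exercise is provisioned over rpc.py before fio is driven through the SPDK NVMe plugin. Paths, NQNs, and arguments are copied from the trace; the backgrounding and the waitforlisten polling loop are simplified.

    # illustrative sketch, not log output
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # 4-core mask, tracepoint group 0xFFFF
    # ...wait until the target listens on /var/tmp/spdk.sock (waitforlisten 88934), then provision it:
    $RPC nvmf_create_transport -t tcp -o -u 8192                    # options come from NVMF_TRANSPORT_OPTS
    $RPC bdev_malloc_create 64 512 -b Malloc1                       # 64 MiB RAM-backed bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
    # fio then drives that namespace through the SPDK fio plugin (fio_plugin in the trace):
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096

On sanitizer builds, fio_plugin also ldd's the plugin and prepends the matching libasan/libclang_rt.asan to LD_PRELOAD, which is why the trace greps the ldd output before every fio run; here both lookups come back empty.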
00:18:49.835 [2024-10-01 06:11:15.280857] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.835 [2024-10-01 06:11:15.280980] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.835 [2024-10-01 06:11:15.281069] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.835 [2024-10-01 06:11:15.281071] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.835 [2024-10-01 06:11:15.315499] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:50.784 [2024-10-01 06:11:16.355889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:50.784 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:18:51.043 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:51.043 Malloc1 00:18:51.301 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:51.301 06:11:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:51.560 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:51.819 [2024-10-01 06:11:17.378643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:51.819 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:52.078 06:11:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:52.336 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:52.336 fio-3.35 00:18:52.336 Starting 1 thread 00:18:54.868 00:18:54.868 test: (groupid=0, jobs=1): err= 0: pid=89011: Tue Oct 1 06:11:20 2024 00:18:54.868 read: IOPS=9443, BW=36.9MiB/s (38.7MB/s)(74.1MiB/2008msec) 00:18:54.868 slat (nsec): min=1850, max=306144, avg=2394.90, stdev=3155.74 00:18:54.868 clat (usec): min=2094, max=14765, avg=7040.93, stdev=618.62 00:18:54.868 lat (usec): min=2128, max=14768, avg=7043.33, stdev=618.38 00:18:54.868 clat percentiles (usec): 00:18:54.868 | 1.00th=[ 5932], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6587], 00:18:54.869 | 30.00th=[ 6718], 40.00th=[ 6849], 50.00th=[ 6980], 60.00th=[ 7111], 00:18:54.869 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7963], 00:18:54.869 | 99.00th=[ 8717], 99.50th=[ 9110], 99.90th=[13304], 99.95th=[13829], 00:18:54.869 | 99.99th=[14615] 00:18:54.869 bw ( KiB/s): min=36670, max=38688, per=99.98%, avg=37769.50, stdev=911.43, samples=4 00:18:54.869 iops : min= 9167, max= 9672, avg=9442.25, stdev=228.06, samples=4 00:18:54.869 write: IOPS=9443, BW=36.9MiB/s (38.7MB/s)(74.1MiB/2008msec); 0 zone resets 00:18:54.869 slat (nsec): min=1959, max=130686, avg=2532.97, stdev=1828.69 00:18:54.869 clat (usec): min=1998, max=14160, avg=6426.05, stdev=562.44 00:18:54.869 lat (usec): min=2012, max=14162, avg=6428.58, stdev=562.33 00:18:54.869 clat 
percentiles (usec): 00:18:54.869 | 1.00th=[ 5407], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:18:54.869 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6390], 60.00th=[ 6521], 00:18:54.869 | 70.00th=[ 6587], 80.00th=[ 6783], 90.00th=[ 7046], 95.00th=[ 7242], 00:18:54.869 | 99.00th=[ 7832], 99.50th=[ 8160], 99.90th=[12387], 99.95th=[13304], 00:18:54.869 | 99.99th=[14091] 00:18:54.869 bw ( KiB/s): min=36894, max=38552, per=100.00%, avg=37791.50, stdev=706.45, samples=4 00:18:54.869 iops : min= 9223, max= 9638, avg=9447.75, stdev=176.82, samples=4 00:18:54.869 lat (msec) : 2=0.01%, 4=0.18%, 10=99.59%, 20=0.23% 00:18:54.869 cpu : usr=69.01%, sys=23.32%, ctx=9, majf=0, minf=6 00:18:54.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:54.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:54.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:54.869 issued rwts: total=18963,18963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:54.869 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:54.869 00:18:54.869 Run status group 0 (all jobs): 00:18:54.869 READ: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.1MiB (77.7MB), run=2008-2008msec 00:18:54.869 WRITE: bw=36.9MiB/s (38.7MB/s), 36.9MiB/s-36.9MiB/s (38.7MB/s-38.7MB/s), io=74.1MiB (77.7MB), run=2008-2008msec 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:54.869 06:11:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:18:54.869 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:18:54.869 fio-3.35 00:18:54.869 Starting 1 thread 00:18:57.402 00:18:57.402 test: (groupid=0, jobs=1): err= 0: pid=89060: Tue Oct 1 06:11:22 2024 00:18:57.402 read: IOPS=8873, BW=139MiB/s (145MB/s)(278MiB/2005msec) 00:18:57.402 slat (usec): min=2, max=130, avg= 3.68, stdev= 2.43 00:18:57.402 clat (usec): min=2507, max=16420, avg=8099.04, stdev=2613.70 00:18:57.402 lat (usec): min=2510, max=16423, avg=8102.71, stdev=2613.78 00:18:57.402 clat percentiles (usec): 00:18:57.402 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5735], 00:18:57.402 | 30.00th=[ 6456], 40.00th=[ 7046], 50.00th=[ 7701], 60.00th=[ 8586], 00:18:57.402 | 70.00th=[ 9372], 80.00th=[10290], 90.00th=[11731], 95.00th=[13042], 00:18:57.402 | 99.00th=[15139], 99.50th=[15533], 99.90th=[16188], 99.95th=[16319], 00:18:57.402 | 99.99th=[16450] 00:18:57.402 bw ( KiB/s): min=66656, max=73728, per=49.43%, avg=70184.00, stdev=3190.85, samples=4 00:18:57.402 iops : min= 4166, max= 4608, avg=4386.50, stdev=199.43, samples=4 00:18:57.402 write: IOPS=4984, BW=77.9MiB/s (81.7MB/s)(143MiB/1834msec); 0 zone resets 00:18:57.402 slat (usec): min=31, max=352, avg=37.10, stdev= 9.64 00:18:57.402 clat (usec): min=3478, max=18660, avg=11311.35, stdev=2227.10 00:18:57.402 lat (usec): min=3510, max=18691, avg=11348.45, stdev=2228.82 00:18:57.402 clat percentiles (usec): 00:18:57.402 | 1.00th=[ 6849], 5.00th=[ 8225], 10.00th=[ 8848], 20.00th=[ 9503], 00:18:57.402 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:18:57.402 | 70.00th=[12256], 80.00th=[13173], 90.00th=[14353], 95.00th=[15533], 00:18:57.402 | 99.00th=[17171], 99.50th=[17433], 99.90th=[18220], 99.95th=[18220], 00:18:57.402 | 99.99th=[18744] 00:18:57.402 bw ( KiB/s): min=68800, max=75424, per=90.86%, avg=72456.00, stdev=2955.15, samples=4 00:18:57.402 iops : min= 4300, max= 4714, avg=4528.50, stdev=184.70, samples=4 00:18:57.402 lat (msec) : 4=1.46%, 10=59.58%, 20=38.96% 00:18:57.402 cpu : usr=79.89%, sys=15.42%, ctx=24, majf=0, minf=2 00:18:57.402 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:18:57.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:57.402 issued rwts: total=17791,9141,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.402 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:57.402 00:18:57.402 Run status group 0 (all jobs): 00:18:57.402 READ: 
bw=139MiB/s (145MB/s), 139MiB/s-139MiB/s (145MB/s-145MB/s), io=278MiB (291MB), run=2005-2005msec 00:18:57.402 WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=143MiB (150MB), run=1834-1834msec 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # bdfs=() 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1496 -- # local bdfs 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:18:57.402 06:11:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:18:57.661 Nvme0n1 00:18:57.661 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=8003cead-9373-4f79-a944-80b098455ec4 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 8003cead-9373-4f79-a944-80b098455ec4 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=8003cead-9373-4f79-a944-80b098455ec4 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:18:57.920 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:58.179 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:18:58.179 { 00:18:58.179 "uuid": "8003cead-9373-4f79-a944-80b098455ec4", 00:18:58.179 "name": "lvs_0", 00:18:58.179 "base_bdev": "Nvme0n1", 00:18:58.179 "total_data_clusters": 4, 00:18:58.179 "free_clusters": 4, 00:18:58.179 "block_size": 4096, 00:18:58.179 "cluster_size": 1073741824 00:18:58.179 } 00:18:58.179 ]' 00:18:58.179 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="8003cead-9373-4f79-a944-80b098455ec4") .free_clusters' 00:18:58.179 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=4 00:18:58.179 06:11:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="8003cead-9373-4f79-a944-80b098455ec4") .cluster_size' 00:18:58.437 4096 00:18:58.437 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:18:58.437 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4096 00:18:58.437 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4096 00:18:58.437 06:11:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:18:58.695 ba7f1713-f0f0-43a8-8e70-5f16c46de889 00:18:58.695 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:18:58.954 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:18:59.212 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:18:59.471 06:11:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:18:59.471 06:11:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:18:59.471 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:18:59.471 fio-3.35 00:18:59.471 Starting 1 thread 00:19:02.003 00:19:02.003 test: (groupid=0, jobs=1): err= 0: pid=89170: Tue Oct 1 06:11:27 2024 00:19:02.003 read: IOPS=6367, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2009msec) 00:19:02.003 slat (nsec): min=1842, max=295361, avg=2888.09, stdev=4064.39 00:19:02.003 clat (usec): min=2913, max=18632, avg=10488.54, stdev=888.46 00:19:02.003 lat (usec): min=2922, max=18634, avg=10491.42, stdev=888.10 00:19:02.003 clat percentiles (usec): 00:19:02.003 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9765], 00:19:02.003 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:19:02.003 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:19:02.003 | 99.00th=[12518], 99.50th=[12911], 99.90th=[16450], 99.95th=[17171], 00:19:02.003 | 99.99th=[18482] 00:19:02.003 bw ( KiB/s): min=24496, max=26208, per=99.96%, avg=25460.00, stdev=741.96, samples=4 00:19:02.003 iops : min= 6124, max= 6552, avg=6365.00, stdev=185.49, samples=4 00:19:02.003 write: IOPS=6367, BW=24.9MiB/s (26.1MB/s)(50.0MiB/2009msec); 0 zone resets 00:19:02.003 slat (nsec): min=1974, max=231387, avg=3001.81, stdev=3048.25 00:19:02.003 clat (usec): min=2371, max=18288, avg=9528.87, stdev=839.39 00:19:02.003 lat (usec): min=2384, max=18290, avg=9531.87, stdev=839.24 00:19:02.003 clat percentiles (usec): 00:19:02.003 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8848], 00:19:02.003 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[ 9503], 60.00th=[ 9634], 00:19:02.003 | 70.00th=[ 9896], 80.00th=[10159], 90.00th=[10552], 95.00th=[10814], 00:19:02.003 | 99.00th=[11469], 99.50th=[11731], 99.90th=[16188], 99.95th=[17171], 00:19:02.003 | 99.99th=[17695] 00:19:02.003 bw ( KiB/s): min=25240, max=25600, per=99.95%, avg=25458.00, stdev=169.38, samples=4 00:19:02.003 iops : min= 6310, max= 6400, avg=6364.50, stdev=42.34, samples=4 00:19:02.003 lat (msec) : 4=0.06%, 10=50.75%, 20=49.18% 00:19:02.003 cpu : usr=71.51%, sys=22.76%, ctx=23, majf=0, minf=6 00:19:02.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:19:02.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:02.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:02.004 issued rwts: total=12793,12793,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:02.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:02.004 00:19:02.004 Run status group 0 (all jobs): 00:19:02.004 READ: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), 
run=2009-2009msec 00:19:02.004 WRITE: bw=24.9MiB/s (26.1MB/s), 24.9MiB/s-24.9MiB/s (26.1MB/s-26.1MB/s), io=50.0MiB (52.4MB), run=2009-2009msec 00:19:02.004 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:02.262 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:19:02.519 06:11:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:19:02.778 { 00:19:02.778 "uuid": "8003cead-9373-4f79-a944-80b098455ec4", 00:19:02.778 "name": "lvs_0", 00:19:02.778 "base_bdev": "Nvme0n1", 00:19:02.778 "total_data_clusters": 4, 00:19:02.778 "free_clusters": 0, 00:19:02.778 "block_size": 4096, 00:19:02.778 "cluster_size": 1073741824 00:19:02.778 }, 00:19:02.778 { 00:19:02.778 "uuid": "85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23", 00:19:02.778 "name": "lvs_n_0", 00:19:02.778 "base_bdev": "ba7f1713-f0f0-43a8-8e70-5f16c46de889", 00:19:02.778 "total_data_clusters": 1022, 00:19:02.778 "free_clusters": 1022, 00:19:02.778 "block_size": 4096, 00:19:02.778 "cluster_size": 4194304 00:19:02.778 } 00:19:02.778 ]' 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23") .free_clusters' 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=1022 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="85f4be0e-eb2f-419a-99ca-b9fdfd4b0b23") .cluster_size' 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=4088 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 4088 00:19:02.778 4088 00:19:02.778 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:19:03.035 2bb1b597-f908-4033-9cad-04de5c1d2e06 00:19:03.035 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:19:03.293 06:11:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:19:03.549 06:11:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:03.807 06:11:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:19:04.065 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:04.065 fio-3.35 00:19:04.065 Starting 1 thread 00:19:06.592 00:19:06.592 test: (groupid=0, jobs=1): err= 0: pid=89248: Tue Oct 1 06:11:31 2024 00:19:06.592 read: 
IOPS=5773, BW=22.6MiB/s (23.6MB/s)(45.3MiB/2009msec) 00:19:06.592 slat (nsec): min=1881, max=304278, avg=2762.21, stdev=4309.19 00:19:06.592 clat (usec): min=3137, max=21213, avg=11622.73, stdev=984.20 00:19:06.592 lat (usec): min=3147, max=21215, avg=11625.49, stdev=983.83 00:19:06.592 clat percentiles (usec): 00:19:06.592 | 1.00th=[ 9503], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:19:06.592 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11600], 60.00th=[11863], 00:19:06.592 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:19:06.592 | 99.00th=[13829], 99.50th=[14091], 99.90th=[19268], 99.95th=[20317], 00:19:06.592 | 99.99th=[21103] 00:19:06.592 bw ( KiB/s): min=22520, max=23424, per=99.82%, avg=23052.00, stdev=392.05, samples=4 00:19:06.592 iops : min= 5630, max= 5856, avg=5763.00, stdev=98.01, samples=4 00:19:06.592 write: IOPS=5757, BW=22.5MiB/s (23.6MB/s)(45.2MiB/2009msec); 0 zone resets 00:19:06.592 slat (usec): min=2, max=281, avg= 2.92, stdev= 3.53 00:19:06.592 clat (usec): min=2492, max=20261, avg=10516.17, stdev=911.28 00:19:06.592 lat (usec): min=2506, max=20263, avg=10519.09, stdev=911.06 00:19:06.592 clat percentiles (usec): 00:19:06.592 | 1.00th=[ 8586], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:19:06.592 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:19:06.592 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11863], 00:19:06.592 | 99.00th=[12518], 99.50th=[12780], 99.90th=[17695], 99.95th=[19006], 00:19:06.592 | 99.99th=[20317] 00:19:06.592 bw ( KiB/s): min=22856, max=23368, per=99.98%, avg=23026.00, stdev=232.54, samples=4 00:19:06.592 iops : min= 5714, max= 5842, avg=5756.50, stdev=58.13, samples=4 00:19:06.592 lat (msec) : 4=0.06%, 10=14.45%, 20=85.44%, 50=0.06% 00:19:06.592 cpu : usr=71.31%, sys=23.56%, ctx=4, majf=0, minf=6 00:19:06.592 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:19:06.592 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:06.592 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:06.592 issued rwts: total=11599,11567,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:06.592 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:06.592 00:19:06.592 Run status group 0 (all jobs): 00:19:06.592 READ: bw=22.6MiB/s (23.6MB/s), 22.6MiB/s-22.6MiB/s (23.6MB/s-23.6MB/s), io=45.3MiB (47.5MB), run=2009-2009msec 00:19:06.592 WRITE: bw=22.5MiB/s (23.6MB/s), 22.5MiB/s-22.5MiB/s (23.6MB/s-23.6MB/s), io=45.2MiB (47.4MB), run=2009-2009msec 00:19:06.592 06:11:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:06.592 06:11:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:19:06.592 06:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:19:06.851 06:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:19:07.109 06:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:19:07.367 06:11:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:19:07.626 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:07.885 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:07.885 rmmod nvme_tcp 00:19:08.143 rmmod nvme_fabrics 00:19:08.143 rmmod nvme_keyring 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@513 -- # '[' -n 88934 ']' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@514 -- # killprocess 88934 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 88934 ']' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 88934 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88934 00:19:08.143 killing process with pid 88934 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88934' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 88934 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 88934 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:08.143 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-save 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@787 -- # iptables-restore 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:08.144 
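The teardown traced above and continued below mirrors the setup: unload the kernel modules, kill the target, strip the test's firewall rules, and dismantle the veth/bridge/namespace topology. Because every rule that ipts() added carries an '-m comment --comment SPDK_NVMF:...' tag, iptr can remove exactly those rules with a save/filter/restore round-trip. A condensed sketch follows; the final namespace removal via _remove_spdk_ns is paraphrased rather than copied.

    # illustrative sketch, not log output
    modprobe -v -r nvme-tcp                                 # nvmfcleanup: rmmod nvme_tcp/nvme_fabrics/nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                                         # killprocess 88934 (reactor_0)
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # iptr: drop only the SPDK_NVMF-tagged rules
    for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$peer" nomaster
        ip link set "$peer" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk                        # roughly what _remove_spdk_ns does here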
06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:08.144 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.403 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.404 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.404 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:19:08.404 ************************************ 00:19:08.404 END TEST nvmf_fio_host 00:19:08.404 ************************************ 00:19:08.404 00:19:08.404 real 0m19.557s 00:19:08.404 user 1m24.929s 00:19:08.404 sys 0m4.539s 00:19:08.404 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:08.404 06:11:33 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.404 06:11:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:08.404 06:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:08.404 06:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:08.404 06:11:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:08.664 ************************************ 00:19:08.664 START TEST nvmf_failover 00:19:08.664 ************************************ 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:19:08.664 * Looking for test storage... 
00:19:08.664 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lcov --version 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:08.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.664 --rc genhtml_branch_coverage=1 00:19:08.664 --rc genhtml_function_coverage=1 00:19:08.664 --rc genhtml_legend=1 00:19:08.664 --rc geninfo_all_blocks=1 00:19:08.664 --rc geninfo_unexecuted_blocks=1 00:19:08.664 00:19:08.664 ' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:08.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.664 --rc genhtml_branch_coverage=1 00:19:08.664 --rc genhtml_function_coverage=1 00:19:08.664 --rc genhtml_legend=1 00:19:08.664 --rc geninfo_all_blocks=1 00:19:08.664 --rc geninfo_unexecuted_blocks=1 00:19:08.664 00:19:08.664 ' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:08.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.664 --rc genhtml_branch_coverage=1 00:19:08.664 --rc genhtml_function_coverage=1 00:19:08.664 --rc genhtml_legend=1 00:19:08.664 --rc geninfo_all_blocks=1 00:19:08.664 --rc geninfo_unexecuted_blocks=1 00:19:08.664 00:19:08.664 ' 00:19:08.664 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:08.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:08.664 --rc genhtml_branch_coverage=1 00:19:08.664 --rc genhtml_function_coverage=1 00:19:08.664 --rc genhtml_legend=1 00:19:08.664 --rc geninfo_all_blocks=1 00:19:08.664 --rc geninfo_unexecuted_blocks=1 00:19:08.664 00:19:08.664 ' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.665 
06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:08.665 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@434 -- # local -g is_hw=no 
00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:08.665 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:08.666 Cannot find device "nvmf_init_br" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:08.666 Cannot find device "nvmf_init_br2" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:19:08.666 Cannot find device "nvmf_tgt_br" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.666 Cannot find device "nvmf_tgt_br2" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:08.666 Cannot find device "nvmf_init_br" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:08.666 Cannot find device "nvmf_init_br2" 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:19:08.666 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:08.926 Cannot find device "nvmf_tgt_br" 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:08.926 Cannot find device "nvmf_tgt_br2" 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:08.926 Cannot find device "nvmf_br" 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:08.926 Cannot find device "nvmf_init_if" 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:08.926 Cannot find device "nvmf_init_if2" 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.926 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.926 
06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.926 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:09.185 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.185 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:19:09.185 00:19:09.185 --- 10.0.0.3 ping statistics --- 00:19:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.185 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:09.185 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:09.185 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.068 ms 00:19:09.185 00:19:09.185 --- 10.0.0.4 ping statistics --- 00:19:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.185 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.185 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:09.185 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:19:09.185 00:19:09.185 --- 10.0.0.1 ping statistics --- 00:19:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.185 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:09.185 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.185 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:19:09.185 00:19:09.185 --- 10.0.0.2 ping statistics --- 00:19:09.185 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.185 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@457 -- # return 0 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@505 -- # nvmfpid=89535 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@506 -- # waitforlisten 89535 00:19:09.185 06:11:34 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89535 ']' 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:09.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.185 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.186 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:09.186 06:11:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:09.186 [2024-10-01 06:11:34.690853] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:19:09.186 [2024-10-01 06:11:34.690995] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:09.445 [2024-10-01 06:11:34.826014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:09.445 [2024-10-01 06:11:34.857670] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:09.445 [2024-10-01 06:11:34.857722] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:09.445 [2024-10-01 06:11:34.857747] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:09.445 [2024-10-01 06:11:34.857754] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:09.445 [2024-10-01 06:11:34.857760] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
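In condensed form, the nvmf_veth_init and nvmfappstart steps above build the test topology: one veth pair per initiator interface and per target interface, the target ends moved into the nvmf_tgt_ns_spdk namespace, the bridge-side peers enslaved to nvmf_br, iptables openings for port 4420, a ping check in each direction, and nvmf_tgt started inside the namespace. A hand-written sketch of the equivalent commands, showing only the first of the two interface pairs (a summary of the logged steps, not additional log output):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end is pushed into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up; ip link set nvmf_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.3                                           # root namespace reaches the target address
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1            # namespace reaches the initiator address
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE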
00:19:09.445 [2024-10-01 06:11:34.857946] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:19:09.445 [2024-10-01 06:11:34.858066] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:19:09.445 [2024-10-01 06:11:34.858072] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:09.445 [2024-10-01 06:11:34.886424] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:10.380 [2024-10-01 06:11:35.890467] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:10.380 06:11:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:10.640 Malloc0 00:19:10.640 06:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:10.898 06:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:11.156 06:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:11.415 [2024-10-01 06:11:36.877111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:11.415 06:11:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:11.673 [2024-10-01 06:11:37.101240] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:11.673 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:11.931 [2024-10-01 06:11:37.321441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=89587 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 
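Stripped of the xtrace prefixes, the target-side configuration and the bdevperf launch recorded above reduce to the following sequence (rpc.py is shorthand for /home/vagrant/spdk_repo/spdk/scripts/rpc.py against the default target socket):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0                  # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f   # -z: wait for RPC configuration before running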
00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 89587 /var/tmp/bdevperf.sock 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89587 ']' 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:11.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:11.931 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:12.190 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:12.190 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:12.190 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:12.449 NVMe0n1 00:19:12.449 06:11:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:12.707 00:19:12.707 06:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=89603 00:19:12.707 06:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:12.708 06:11:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:19:13.642 06:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:13.902 [2024-10-01 06:11:39.481742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.902 [2024-10-01 06:11:39.481870] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set [identical *ERROR* line repeated with successive timestamps from 06:11:39.481878 through 06:11:39.482579, elided] [2024-10-01 06:11:39.482587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same
with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482647] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 [2024-10-01 06:11:39.482699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x166f600 is same with the state(6) to be set 00:19:13.904 06:11:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:19:17.225 06:11:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:17.225 00:19:17.225 06:11:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:17.791 06:11:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:19:21.078 06:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:21.078 [2024-10-01 06:11:46.319213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:21.078 06:11:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@55 -- # sleep 1 00:19:22.013 06:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:22.013 06:11:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 89603 00:19:28.594 { 00:19:28.594 "results": [ 00:19:28.594 { 00:19:28.594 "job": "NVMe0n1", 00:19:28.594 "core_mask": "0x1", 00:19:28.594 "workload": "verify", 00:19:28.594 "status": "finished", 00:19:28.594 "verify_range": { 00:19:28.594 "start": 0, 00:19:28.594 "length": 16384 00:19:28.594 }, 00:19:28.594 "queue_depth": 128, 00:19:28.594 "io_size": 4096, 00:19:28.594 "runtime": 15.008006, 00:19:28.594 "iops": 10209.684084614571, 00:19:28.594 "mibps": 39.88157845552567, 00:19:28.594 "io_failed": 3645, 00:19:28.594 "io_timeout": 0, 00:19:28.594 "avg_latency_us": 12217.36590192815, 00:19:28.594 "min_latency_us": 565.9927272727273, 00:19:28.594 "max_latency_us": 15609.483636363637 00:19:28.594 } 00:19:28.594 ], 00:19:28.594 "core_count": 1 00:19:28.594 } 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 89587 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89587 ']' 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89587 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89587 00:19:28.594 killing process with pid 89587 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89587' 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89587 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89587 00:19:28.594 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:28.594 [2024-10-01 06:11:37.383427] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:19:28.594 [2024-10-01 06:11:37.383525] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89587 ] 00:19:28.594 [2024-10-01 06:11:37.514165] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.594 [2024-10-01 06:11:37.551113] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.594 [2024-10-01 06:11:37.579786] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:28.594 Running I/O for 15 seconds... 
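Condensed, the failover choreography that produced the results block above is: bdevperf runs a 15-second verify workload against NVMe0n1, attached through 10.0.0.3:4420 with 10.0.0.3:4421 as a second path, while host/failover.sh removes and re-adds listeners underneath it. A summary of the logged steps ($run_test_pid is the PID captured when perform_tests was launched; rpc.py as above):

  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop the active path
  sleep 3
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  sleep 3
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420       # restore the original path
  sleep 1
  rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422
  wait $run_test_pid                                                                             # prints the JSON summary above

The run finished at roughly 10.2k IOPS (about 39.9 MiB/s) over 15 s, with io_failed reporting 3645 I/Os that errored during the run, consistent with the listener switches.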
00:19:28.594 7956.00 IOPS, 31.08 MiB/s [2024-10-01 06:11:39.482756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:74016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:74024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:74032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:74040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:74048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:74056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.482972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.482987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:74064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:74072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:74080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:74088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 
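The READ ... / ABORTED - SQ DELETION pairs filling the rest of this dump appear to be the commands that were in flight on the queue pair torn down when the 10.0.0.3:4420 listener was removed; each is completed with ABORTED - SQ DELETION (00/08), which is consistent with the io_failed count in the summary. A quick way to gauge how many completions were aborted this way is to grep the saved log (same file as the cat above):

  grep -c 'ABORTED - SQ DELETION' /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt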
[2024-10-01 06:11:39.483100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:74096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:74112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:74120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:74128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:74136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:74144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:74152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:74160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:74168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:74176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:74184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:74192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:74200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:74208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:74216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:74224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:74232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:74248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:72 nsid:1 lba:74256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:74264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:74272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:74280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:74288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:74296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.483971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:74304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.483986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.484001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:74312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.484015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.484031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:74320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.595 [2024-10-01 06:11:39.484045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.595 [2024-10-01 06:11:39.484060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:74328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:74336 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:74344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:74360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:74368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:74376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:74384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:74392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:74400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:74416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:19:28.596 [2024-10-01 06:11:39.484442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:74424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:74432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:74440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:74448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:74456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:74464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:74480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:74496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484729] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:74504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:74512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:74520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:74528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:74544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.484981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:74560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.484994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:74568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:74576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:74584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:74592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:74600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:74608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:74616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:74624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:74632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.596 [2024-10-01 06:11:39.485282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:74640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.596 [2024-10-01 06:11:39.485295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:74648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:74656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:74664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:74672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:74680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:74688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:74696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:74704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:74712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:74720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:74728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:74736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:74744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:74760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:74768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:74776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:74792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:74808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.485959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.485975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:19:28.597 [2024-10-01 06:11:39.485990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:74824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:74832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:74840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:74848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:74856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:74864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:74872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:74888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:74896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.597 [2024-10-01 06:11:39.486266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 
06:11:39.486281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:74920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:74928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:74936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:74944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:74952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:74960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.597 [2024-10-01 06:11:39.486446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.597 [2024-10-01 06:11:39.486461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:74968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:74976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:74984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:74992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486578] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:75000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:75008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:75016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:75024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:75032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:39.486714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:74904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:39.486744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486758] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7b540 is same with the state(6) to be set 00:19:28.598 [2024-10-01 06:11:39.486775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.598 [2024-10-01 06:11:39.486785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.598 [2024-10-01 06:11:39.486796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74912 len:8 PRP1 0x0 PRP2 0x0 00:19:28.598 [2024-10-01 06:11:39.486809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486853] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa7b540 was disconnected and freed. reset controller. 
00:19:28.598 [2024-10-01 06:11:39.486871] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:28.598 [2024-10-01 06:11:39.486937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.598 [2024-10-01 06:11:39.486960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.486976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.598 [2024-10-01 06:11:39.486989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.487003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.598 [2024-10-01 06:11:39.487015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.487029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.598 [2024-10-01 06:11:39.487043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:39.487055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.598 [2024-10-01 06:11:39.490723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.598 [2024-10-01 06:11:39.490760] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5af10 (9): Bad file descriptor 00:19:28.598 [2024-10-01 06:11:39.527534] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
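The failover sequence above is compact once separated from the I/O noise: bdev_nvme_failover_trid starts failover of the transport ID from 10.0.0.3:4420 to 10.0.0.3:4421, the pending admin ASYNC EVENT REQUESTs on qid:0 are aborted, nqn.2016-06.io.spdk:cnode1 is marked failed, the old tqpair 0xa5af10 can no longer be flushed (Bad file descriptor), and the reset completes successfully. The periodic throughput samples then resume below (8729.50, 9310.33, 9593.25 IOPS). As a quick sanity check on those figures (an editor's sketch, assuming the namespace uses 512-byte blocks so that the len:8 commands in this run are 4 KiB I/Os), the MiB/s value printed with each sample is just IOPS times the I/O size, and that assumption reproduces the log's numbers exactly:

# Editor's sketch: MiB/s printed with each IOPS sample is IOPS x I/O size.
# Assumes 512-byte blocks, so the len:8 commands in this run are 4 KiB each.
def mib_per_s(iops, io_bytes=8 * 512):
    return iops * io_bytes / (1024 * 1024)

for iops in (7956.00, 8729.50, 9310.33, 9593.25):
    print(f"{iops:8.2f} IOPS -> {mib_per_s(iops):5.2f} MiB/s")
# prints 31.08, 34.10, 36.37 and 37.47 MiB/s, matching the samples in this log

The next flood of ABORTED - SQ DELETION messages, timestamped 06:11:43 instead of 06:11:39 and covering a different LBA range, appears to be the same pattern repeating for the next path switch in the test.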
00:19:28.598 8729.50 IOPS, 34.10 MiB/s 9310.33 IOPS, 36.37 MiB/s 9593.25 IOPS, 37.47 MiB/s [2024-10-01 06:11:43.087923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:119112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:119120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:119128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:119136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:119144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:119152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:119160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:119168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.598 [2024-10-01 06:11:43.088291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:118536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:118544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:118552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:118560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:118568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:118592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:118600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.598 [2024-10-01 06:11:43.088606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:118624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.598 [2024-10-01 06:11:43.088618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:19:28.598 [2024-10-01 06:11:43.088632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:118632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.088644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.088669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:118648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.088695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:118656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.088721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:119184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:119200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:119208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:119216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 
06:11:43.088900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:119224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:119232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.599 [2024-10-01 06:11:43.088955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.088979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:118664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.088995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:118672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:118688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:118696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:118704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:118712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:118720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089229] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:118728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:118736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:118744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:118752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:118760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:118768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:118776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:118800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089519] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:118816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:118824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.599 [2024-10-01 06:11:43.089594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.599 [2024-10-01 06:11:43.089607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:118832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.089620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:118840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.089647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:118848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.089674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:119240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:119248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:119256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:119264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 
nsid:1 lba:119272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:119280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:119288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:119296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:119304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:119312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.089976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.089991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:119320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:119336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:119344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:119352 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:119360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:118856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:118864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:118872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:118880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:118888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:118896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:118904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:118912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:118920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 
[2024-10-01 06:11:43.090412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:118928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:118936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:118944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:118952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:118960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:118968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:118976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.600 [2024-10-01 06:11:43.090600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:119368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:119376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:119384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090687] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:119392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.600 [2024-10-01 06:11:43.090714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.600 [2024-10-01 06:11:43.090728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:119400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:119408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:119416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:119424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:119432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:119440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:119448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:119456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:119464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.090969] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.090983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:119472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.091004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:119480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.091032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:119488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.601 [2024-10-01 06:11:43.091059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:118984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:118992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:119000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:119016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:119024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091251] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:119072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:119080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:119088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:119096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.601 [2024-10-01 06:11:43.091483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091496] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7ebb0 is same with the state(6) to be set 00:19:28.601 [2024-10-01 06:11:43.091512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119104 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119496 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091603] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091612] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119504 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091646] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119512 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091704] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119520 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091740] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119528 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.601 [2024-10-01 06:11:43.091786] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.601 [2024-10-01 06:11:43.091795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.601 [2024-10-01 06:11:43.091831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119536 len:8 PRP1 0x0 PRP2 0x0 00:19:28.601 [2024-10-01 06:11:43.091846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.091860] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.602 [2024-10-01 06:11:43.091870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.602 [2024-10-01 06:11:43.091880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119544 len:8 PRP1 0x0 PRP2 0x0 00:19:28.602 [2024-10-01 06:11:43.091894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.091907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.602 [2024-10-01 06:11:43.091928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.602 [2024-10-01 06:11:43.091940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:119552 len:8 PRP1 0x0 PRP2 0x0 00:19:28.602 [2024-10-01 06:11:43.091953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.091999] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa7ebb0 was disconnected and freed. reset controller. 00:19:28.602 [2024-10-01 06:11:43.092017] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:19:28.602 [2024-10-01 06:11:43.092071] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.602 [2024-10-01 06:11:43.092093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.092109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.602 [2024-10-01 06:11:43.092123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.092167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.602 [2024-10-01 06:11:43.092180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.092209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.602 [2024-10-01 06:11:43.092221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:43.092254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:28.602 [2024-10-01 06:11:43.092287] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5af10 (9): Bad file descriptor 00:19:28.602 [2024-10-01 06:11:43.096112] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.602 [2024-10-01 06:11:43.134875] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:28.602 9667.40 IOPS, 37.76 MiB/s 9817.33 IOPS, 38.35 MiB/s 9922.29 IOPS, 38.76 MiB/s 9999.38 IOPS, 39.06 MiB/s 10054.56 IOPS, 39.28 MiB/s [2024-10-01 06:11:47.604071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:113072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:113088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:113120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:113136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:113144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604463] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:113168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:113192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.602 [2024-10-01 06:11:47.604642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:112560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:112576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:112584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:112592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:112600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:112608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:112616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:112624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:112632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:112640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:112648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.604976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.604992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.602 [2024-10-01 06:11:47.605004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.602 [2024-10-01 06:11:47.605017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:112664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:112672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:112680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:112688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:112704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:112712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:112720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:112728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:112736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:112744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:113216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:113224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:113240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:113248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:113256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 
[2024-10-01 06:11:47.605579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:113280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:113288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:113304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.603 [2024-10-01 06:11:47.605668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:112760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:112768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:112776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:112784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:112792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605837] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:112800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:112808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:112816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.603 [2024-10-01 06:11:47.605921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.603 [2024-10-01 06:11:47.605935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:112824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.605947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.605960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:112832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.605972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.605987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:112840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:112848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:112856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:112864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606117] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:113312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:113352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606386] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:113400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:113432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:113440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.604 [2024-10-01 06:11:47.606568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:112880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:112888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:112896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:112904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:112912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:112920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:112928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:112936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:112944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:112952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:112960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:112968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:112976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:112984 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:112992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.606974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.606988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.604 [2024-10-01 06:11:47.607000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.604 [2024-10-01 06:11:47.607013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:113448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:113456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:113496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 
[2024-10-01 06:11:47.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:19:28.605 [2024-10-01 06:11:47.607241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:113056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:19:28.605 [2024-10-01 06:11:47.607425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607437] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa7e870 is same with the state(6) to be set 00:19:28.605 [2024-10-01 06:11:47.607452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113064 len:8 PRP1 0x0 PRP2 0x0 
00:19:28.605 [2024-10-01 06:11:47.607483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607505] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113520 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607538] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113528 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607587] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607596] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113536 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607629] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113544 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113552 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607713] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113560 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607742] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607754] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113568 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:19:28.605 [2024-10-01 06:11:47.607848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:28.605 [2024-10-01 06:11:47.607860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113576 len:8 PRP1 0x0 PRP2 0x0 00:19:28.605 [2024-10-01 06:11:47.607873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.607931] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xa7e870 was disconnected and freed. reset controller. 00:19:28.605 [2024-10-01 06:11:47.607952] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:19:28.605 [2024-10-01 06:11:47.608004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.605 [2024-10-01 06:11:47.608025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.608050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.605 [2024-10-01 06:11:47.608065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.608079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.605 [2024-10-01 06:11:47.608092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.608106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:28.605 [2024-10-01 06:11:47.608118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:28.605 [2024-10-01 06:11:47.608131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:28.605 [2024-10-01 06:11:47.608163] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa5af10 (9): Bad file descriptor 00:19:28.605 [2024-10-01 06:11:47.611666] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:28.605 [2024-10-01 06:11:47.648271] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:28.605 10047.10 IOPS, 39.25 MiB/s 10094.45 IOPS, 39.43 MiB/s 10136.83 IOPS, 39.60 MiB/s 10165.46 IOPS, 39.71 MiB/s 10189.64 IOPS, 39.80 MiB/s 00:19:28.605 Latency(us) 00:19:28.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.605 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:28.605 Verification LBA range: start 0x0 length 0x4000 00:19:28.606 NVMe0n1 : 15.01 10209.68 39.88 242.87 0.00 12217.37 565.99 15609.48 00:19:28.606 =================================================================================================================== 00:19:28.606 Total : 10209.68 39.88 242.87 0.00 12217.37 565.99 15609.48 00:19:28.606 Received shutdown signal, test time was about 15.000000 seconds 00:19:28.606 00:19:28.606 Latency(us) 00:19:28.606 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.606 =================================================================================================================== 00:19:28.606 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:19:28.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=89776 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 89776 /var/tmp/bdevperf.sock 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 89776 ']' 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:28.606 06:11:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:29.176 06:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:29.176 06:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:19:29.176 06:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:29.435 [2024-10-01 06:11:54.796241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:29.435 06:11:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:19:29.696 [2024-10-01 06:11:55.072476] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:19:29.696 06:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:29.959 NVMe0n1 00:19:29.959 06:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:30.219 00:19:30.219 06:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:30.478 00:19:30.478 06:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:30.478 06:11:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:19:30.736 06:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:30.995 06:11:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:19:34.284 06:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:34.284 06:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:19:34.284 06:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:34.284 06:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=89857 00:19:34.284 06:11:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 89857 00:19:35.661 { 00:19:35.661 "results": [ 00:19:35.661 { 00:19:35.661 "job": "NVMe0n1", 00:19:35.661 "core_mask": "0x1", 00:19:35.661 "workload": "verify", 00:19:35.661 "status": "finished", 00:19:35.661 "verify_range": { 00:19:35.661 "start": 0, 00:19:35.661 "length": 16384 00:19:35.661 }, 00:19:35.661 "queue_depth": 128, 00:19:35.661 "io_size": 4096, 
00:19:35.661 "runtime": 1.013044, 00:19:35.661 "iops": 7601.841578450689, 00:19:35.661 "mibps": 29.694693665823003, 00:19:35.661 "io_failed": 0, 00:19:35.661 "io_timeout": 0, 00:19:35.661 "avg_latency_us": 16775.849747966615, 00:19:35.661 "min_latency_us": 2144.8145454545456, 00:19:35.661 "max_latency_us": 14537.076363636364 00:19:35.661 } 00:19:35.661 ], 00:19:35.661 "core_count": 1 00:19:35.661 } 00:19:35.661 06:12:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:35.661 [2024-10-01 06:11:53.566536] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:19:35.661 [2024-10-01 06:11:53.566639] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89776 ] 00:19:35.661 [2024-10-01 06:11:53.703708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.661 [2024-10-01 06:11:53.737439] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:35.661 [2024-10-01 06:11:53.765929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:35.661 [2024-10-01 06:11:56.482711] bdev_nvme.c:1987:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:19:35.661 [2024-10-01 06:11:56.482834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.661 [2024-10-01 06:11:56.482860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.661 [2024-10-01 06:11:56.482878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.661 [2024-10-01 06:11:56.482891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.661 [2024-10-01 06:11:56.482904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.661 [2024-10-01 06:11:56.482929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.661 [2024-10-01 06:11:56.482945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:35.661 [2024-10-01 06:11:56.482958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:35.661 [2024-10-01 06:11:56.482971] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:35.661 [2024-10-01 06:11:56.483016] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:35.661 [2024-10-01 06:11:56.483044] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x84bf10 (9): Bad file descriptor 00:19:35.661 [2024-10-01 06:11:56.493021] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:35.661 Running I/O for 1 seconds... 
00:19:35.661 7572.00 IOPS, 29.58 MiB/s 00:19:35.662 Latency(us) 00:19:35.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.662 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:35.662 Verification LBA range: start 0x0 length 0x4000 00:19:35.662 NVMe0n1 : 1.01 7601.84 29.69 0.00 0.00 16775.85 2144.81 14537.08 00:19:35.662 =================================================================================================================== 00:19:35.662 Total : 7601.84 29.69 0.00 0.00 16775.85 2144.81 14537.08 00:19:35.662 06:12:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:35.662 06:12:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:19:35.662 06:12:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:35.920 06:12:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:35.920 06:12:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:19:36.178 06:12:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:36.436 06:12:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:19:39.724 06:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:39.724 06:12:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 89776 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89776 ']' 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89776 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89776 00:19:39.724 killing process with pid 89776 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89776' 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89776 00:19:39.724 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89776 00:19:39.982 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:19:39.982 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:40.241 
06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:40.241 rmmod nvme_tcp 00:19:40.241 rmmod nvme_fabrics 00:19:40.241 rmmod nvme_keyring 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@513 -- # '[' -n 89535 ']' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@514 -- # killprocess 89535 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 89535 ']' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 89535 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89535 00:19:40.241 killing process with pid 89535 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89535' 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 89535 00:19:40.241 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 89535 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-save 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@787 -- # iptables-restore 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:40.500 06:12:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:40.500 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:19:40.758 00:19:40.758 real 0m32.167s 00:19:40.758 user 2m3.861s 00:19:40.758 sys 0m5.334s 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:19:40.758 ************************************ 00:19:40.758 END TEST nvmf_failover 00:19:40.758 ************************************ 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:40.758 ************************************ 00:19:40.758 START TEST nvmf_host_discovery 00:19:40.758 ************************************ 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:19:40.758 * Looking for test storage... 
00:19:40.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lcov --version 00:19:40.758 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:41.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.018 --rc genhtml_branch_coverage=1 00:19:41.018 --rc genhtml_function_coverage=1 00:19:41.018 --rc genhtml_legend=1 00:19:41.018 --rc geninfo_all_blocks=1 00:19:41.018 --rc geninfo_unexecuted_blocks=1 00:19:41.018 00:19:41.018 ' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:41.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.018 --rc genhtml_branch_coverage=1 00:19:41.018 --rc genhtml_function_coverage=1 00:19:41.018 --rc genhtml_legend=1 00:19:41.018 --rc geninfo_all_blocks=1 00:19:41.018 --rc geninfo_unexecuted_blocks=1 00:19:41.018 00:19:41.018 ' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:41.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.018 --rc genhtml_branch_coverage=1 00:19:41.018 --rc genhtml_function_coverage=1 00:19:41.018 --rc genhtml_legend=1 00:19:41.018 --rc geninfo_all_blocks=1 00:19:41.018 --rc geninfo_unexecuted_blocks=1 00:19:41.018 00:19:41.018 ' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:41.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:41.018 --rc genhtml_branch_coverage=1 00:19:41.018 --rc genhtml_function_coverage=1 00:19:41.018 --rc genhtml_legend=1 00:19:41.018 --rc geninfo_all_blocks=1 00:19:41.018 --rc geninfo_unexecuted_blocks=1 00:19:41.018 00:19:41.018 ' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:41.018 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:19:41.018 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
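The variables set up just above drive nvmf_veth_init, whose ip commands appear in the trace that follows. As a reading aid, the topology they build can be condensed into the sketch below (the interface names and addresses are exactly those in the trace; the grouping into a loop is illustrative, not the verbatim helper from test/nvmf/common.sh):

    # Sketch, condensed from the nvmf_veth_init trace below (assumption: simplified, not the real script).
    # Two initiator veths stay in the default netns; the two target veths move into nvmf_tgt_ns_spdk.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                # first initiator IP
    ip addr add 10.0.0.2/24 dev nvmf_init_if2                               # second initiator IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if  # first target IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 # second target IP
    # Bring the bridge-side peers up and enslave them to nvmf_br so the two ends can reach each other.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done

The "Cannot find device" and "Cannot open network namespace" messages in the trace are the expected result of the cleanup pass (nomaster/delete) running before any of these devices exist on a fresh node.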
00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:41.019 Cannot find device "nvmf_init_br" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:41.019 Cannot find device "nvmf_init_br2" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:41.019 Cannot find device "nvmf_tgt_br" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:41.019 Cannot find device "nvmf_tgt_br2" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:41.019 Cannot find device "nvmf_init_br" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:41.019 Cannot find device "nvmf_init_br2" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:41.019 Cannot find device "nvmf_tgt_br" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:41.019 Cannot find device "nvmf_tgt_br2" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:41.019 Cannot find device "nvmf_br" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:41.019 Cannot find device "nvmf_init_if" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:41.019 Cannot find device "nvmf_init_if2" 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:41.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:41.019 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:41.019 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:41.278 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:41.279 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:41.279 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.071 ms 00:19:41.279 00:19:41.279 --- 10.0.0.3 ping statistics --- 00:19:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.279 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:41.279 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:41.279 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.067 ms 00:19:41.279 00:19:41.279 --- 10.0.0.4 ping statistics --- 00:19:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.279 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:41.279 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:41.279 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:19:41.279 00:19:41.279 --- 10.0.0.1 ping statistics --- 00:19:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.279 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:41.279 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:41.279 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:19:41.279 00:19:41.279 --- 10.0.0.2 ping statistics --- 00:19:41.279 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:41.279 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@457 -- # return 0 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@505 -- # nvmfpid=90188 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@506 -- # waitforlisten 90188 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90188 ']' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.279 06:12:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.538 [2024-10-01 06:12:06.910028] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
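Once connectivity is verified by the pings above, nvmfappstart launches the stock nvmf_tgt binary inside the test namespace and waits for its RPC socket. A condensed sketch of that step, assuming the standard SPDK rpc.py client at scripts/rpc.py and a simple polling loop in place of the real waitforlisten helper:

    # Sketch (assumption: simplified from the trace; the real helpers live in test/nvmf/common.sh
    # and test/common/autotest_common.sh).
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # Poll the default RPC socket (/var/tmp/spdk.sock) until the target answers, then proceed with rpc calls.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done

Running the target under "ip netns exec nvmf_tgt_ns_spdk" is what lets it listen on the 10.0.0.3/10.0.0.4 addresses configured earlier, while the host-side tools in the default namespace reach it over the bridge.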
00:19:41.538 [2024-10-01 06:12:06.910485] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:41.538 [2024-10-01 06:12:07.044472] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.538 [2024-10-01 06:12:07.087075] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.538 [2024-10-01 06:12:07.087373] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.539 [2024-10-01 06:12:07.087582] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.539 [2024-10-01 06:12:07.087779] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.539 [2024-10-01 06:12:07.087992] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:41.539 [2024-10-01 06:12:07.088208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.539 [2024-10-01 06:12:07.122451] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 [2024-10-01 06:12:07.229029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 [2024-10-01 06:12:07.237166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.798 06:12:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 null0 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.798 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.798 null1 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.799 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=90214 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 90214 /tmp/host.sock 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 90214 ']' 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:41.799 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:41.799 [2024-10-01 06:12:07.323341] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:19:41.799 [2024-10-01 06:12:07.323611] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90214 ] 00:19:42.057 [2024-10-01 06:12:07.463870] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:42.057 [2024-10-01 06:12:07.505116] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.057 [2024-10-01 06:12:07.537412] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.057 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.058 06:12:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.058 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.575 [2024-10-01 06:12:07.949329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.575 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.576 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.576 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.576 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.576 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.576 06:12:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:19:42.576 06:12:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:19:43.143 [2024-10-01 06:12:08.601975] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:43.143 [2024-10-01 06:12:08.601999] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:43.143 
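The discovery attach messages above are the net effect of the RPC sequence traced earlier. Condensed, the target/host split looks roughly like the following (rpc.py invocations are an assumption standing in for the test's rpc_cmd wrapper; all RPC names and arguments are taken from the trace):

    # Target side (default RPC socket inside nvmf_tgt_ns_spdk) -- condensed from the trace above.
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009
    rpc.py bdev_null_create null0 1000 512
    rpc.py bdev_null_create null1 1000 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test

    # Host side (second nvmf_tgt listening on /tmp/host.sock) -- the discovery poller then creates
    # controller "nvme0" and bdev "nvme0n1", which is what the waitforcondition loops below poll for.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # expect: nvme0n1

The later steps in the trace extend this pattern: adding null1 produces nvme0n2, adding and removing the 4420/4421 listeners changes the paths reported by bdev_nvme_get_controllers, and notify_get_notifications is used to count the bdev add events.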
[2024-10-01 06:12:08.602016] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.143 [2024-10-01 06:12:08.608021] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:19:43.143 [2024-10-01 06:12:08.664494] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:43.143 [2024-10-01 06:12:08.664669] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.708 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.709 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:43.968 06:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 [2024-10-01 06:12:09.510479] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:43.968 [2024-10-01 06:12:09.510779] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:43.968 [2024-10-01 06:12:09.510805] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:43.968 [2024-10-01 06:12:09.516791] bdev_nvme.c:7086:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:19:43.968 06:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:43.968 [2024-10-01 06:12:09.576174] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:43.968 [2024-10-01 06:12:09.576222] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:19:43.968 [2024-10-01 06:12:09.576229] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:43.968 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.969 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:43.969 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.227 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.227 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.228 [2024-10-01 06:12:09.739236] bdev_nvme.c:7144:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:19:44.228 [2024-10-01 06:12:09.739309] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:44.228 [2024-10-01 06:12:09.742484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.228 [2024-10-01 06:12:09.742677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.228 [2024-10-01 06:12:09.742891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.228 [2024-10-01 06:12:09.743164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.228 [2024-10-01 06:12:09.743395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.228 [2024-10-01 06:12:09.743694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.228 [2024-10-01 06:12:09.743759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:19:44.228 [2024-10-01 06:12:09.743774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:44.228 [2024-10-01 06:12:09.743783] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf72480 is same with the state(6) to be set 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:19:44.228 [2024-10-01 06:12:09.745843] bdev_nvme.c:6949:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:19:44.228 [2024-10-01 06:12:09.745865] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:44.228 [2024-10-01 06:12:09.745959] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf72480 (9): Bad file descriptor 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.228 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.228 06:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.487 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.487 06:12:09 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:19:44.488 06:12:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.488 
06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:19:44.488 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:19:44.746 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:19:44.746 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.746 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:44.746 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.747 06:12:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.682 [2024-10-01 06:12:11.162633] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:19:45.682 [2024-10-01 06:12:11.162658] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:19:45.682 [2024-10-01 06:12:11.162675] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:19:45.682 [2024-10-01 06:12:11.168662] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:19:45.682 [2024-10-01 06:12:11.229466] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:19:45.682 [2024-10-01 06:12:11.229503] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.682 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:19:45.683 request: 00:19:45.683 { 00:19:45.683 "name": "nvme", 00:19:45.683 "trtype": "tcp", 00:19:45.683 "traddr": "10.0.0.3", 00:19:45.683 "adrfam": "ipv4", 00:19:45.683 "trsvcid": "8009", 00:19:45.683 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:45.683 "wait_for_attach": true, 00:19:45.683 "method": "bdev_nvme_start_discovery", 00:19:45.683 "req_id": 1 00:19:45.683 } 00:19:45.683 Got JSON-RPC error response 00:19:45.683 response: 00:19:45.683 { 00:19:45.683 "code": -17, 00:19:45.683 "message": "File exists" 00:19:45.683 } 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.683 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.941 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.941 request: 00:19:45.941 { 00:19:45.941 "name": "nvme_second", 00:19:45.942 "trtype": "tcp", 00:19:45.942 "traddr": "10.0.0.3", 00:19:45.942 "adrfam": "ipv4", 00:19:45.942 "trsvcid": "8009", 00:19:45.942 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:45.942 "wait_for_attach": true, 00:19:45.942 "method": "bdev_nvme_start_discovery", 00:19:45.942 "req_id": 1 00:19:45.942 } 00:19:45.942 Got JSON-RPC error response 00:19:45.942 response: 00:19:45.942 { 00:19:45.942 "code": -17, 00:19:45.942 "message": "File exists" 00:19:45.942 } 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:19:45.942 06:12:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.942 06:12:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:46.900 [2024-10-01 06:12:12.493928] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:46.900 [2024-10-01 06:12:12.493988] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67db0 with addr=10.0.0.3, port=8010 00:19:46.900 [2024-10-01 06:12:12.494006] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:46.900 [2024-10-01 06:12:12.494027] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:46.900 [2024-10-01 06:12:12.494035] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:48.284 [2024-10-01 06:12:13.493887] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:48.285 [2024-10-01 06:12:13.493953] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf67db0 with addr=10.0.0.3, port=8010 00:19:48.285 [2024-10-01 06:12:13.493969] nvme_tcp.c:2723:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:19:48.285 [2024-10-01 06:12:13.493978] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:19:48.285 [2024-10-01 06:12:13.493985] bdev_nvme.c:7224:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:19:49.222 [2024-10-01 06:12:14.493810] 
bdev_nvme.c:7205:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:19:49.222 request: 00:19:49.222 { 00:19:49.222 "name": "nvme_second", 00:19:49.222 "trtype": "tcp", 00:19:49.222 "traddr": "10.0.0.3", 00:19:49.222 "adrfam": "ipv4", 00:19:49.222 "trsvcid": "8010", 00:19:49.222 "hostnqn": "nqn.2021-12.io.spdk:test", 00:19:49.222 "wait_for_attach": false, 00:19:49.222 "attach_timeout_ms": 3000, 00:19:49.222 "method": "bdev_nvme_start_discovery", 00:19:49.222 "req_id": 1 00:19:49.222 } 00:19:49.222 Got JSON-RPC error response 00:19:49.222 response: 00:19:49.222 { 00:19:49.222 "code": -110, 00:19:49.222 "message": "Connection timed out" 00:19:49.222 } 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 90214 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # nvmfcleanup 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:49.222 rmmod nvme_tcp 00:19:49.222 rmmod nvme_fabrics 00:19:49.222 rmmod nvme_keyring 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:19:49.222 06:12:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@513 -- # '[' -n 90188 ']' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@514 -- # killprocess 90188 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 90188 ']' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 90188 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90188 00:19:49.222 killing process with pid 90188 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90188' 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 90188 00:19:49.222 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 90188 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-save 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@787 -- # iptables-restore 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete 
nvmf_br type bridge 00:19:49.481 06:12:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.481 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:19:49.741 00:19:49.741 real 0m8.863s 00:19:49.741 user 0m16.883s 00:19:49.741 sys 0m1.864s 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:19:49.741 ************************************ 00:19:49.741 END TEST nvmf_host_discovery 00:19:49.741 ************************************ 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:49.741 06:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:19:49.741 ************************************ 00:19:49.742 START TEST nvmf_host_multipath_status 00:19:49.742 ************************************ 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:19:49.742 * Looking for test storage... 
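The discovery trace above repeats one pattern: issue an SPDK JSON-RPC call against the host application's socket, then poll until the reported controllers/bdevs match the expectation (the waitforcondition/max=10 loops visible in the xtrace output). The bash sketch below shows that pattern in isolation, reusing only the rpc.py sub-commands, socket path, address, ports and hostnqn that appear in the log; the helper name, the one-second retry delay and the standalone rpc.py invocation (instead of the suite's rpc_cmd wrapper) are illustrative assumptions, not the autotest scripts themselves.

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/tmp/host.sock

# Start discovery against the discovery service used in the trace (10.0.0.3:8009)
# and wait for the initial attach to complete (-w == wait_for_attach).
"$rpc_py" -s "$sock" bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Poll until the expected namespaces appear, mirroring the waitforcondition loop.
wait_for_bdevs() {                      # illustrative helper, not part of the test suite
    local expected=$1 max=10
    while (( max-- )); do
        local bdevs
        bdevs=$("$rpc_py" -s "$sock" bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
        [[ "$bdevs" == "$expected" ]] && return 0
        sleep 1                         # retry delay is an assumption
    done
    return 1
}
wait_for_bdevs "nvme0n1 nvme0n2"

# Inspect attached controller paths and async notifications, as the test does.
"$rpc_py" -s "$sock" bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'
"$rpc_py" -s "$sock" notify_get_notifications -i 2 | jq '. | length'

# Tear discovery down again; calling bdev_nvme_start_discovery a second time with the
# same -b name is what produces the JSON-RPC "File exists" (-17) error in the trace.
"$rpc_py" -s "$sock" bdev_nvme_stop_discovery -b nvme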
00:19:49.742 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lcov --version 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:49.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.742 --rc genhtml_branch_coverage=1 00:19:49.742 --rc genhtml_function_coverage=1 00:19:49.742 --rc genhtml_legend=1 00:19:49.742 --rc geninfo_all_blocks=1 00:19:49.742 --rc geninfo_unexecuted_blocks=1 00:19:49.742 00:19:49.742 ' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:49.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.742 --rc genhtml_branch_coverage=1 00:19:49.742 --rc genhtml_function_coverage=1 00:19:49.742 --rc genhtml_legend=1 00:19:49.742 --rc geninfo_all_blocks=1 00:19:49.742 --rc geninfo_unexecuted_blocks=1 00:19:49.742 00:19:49.742 ' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:49.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.742 --rc genhtml_branch_coverage=1 00:19:49.742 --rc genhtml_function_coverage=1 00:19:49.742 --rc genhtml_legend=1 00:19:49.742 --rc geninfo_all_blocks=1 00:19:49.742 --rc geninfo_unexecuted_blocks=1 00:19:49.742 00:19:49.742 ' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:49.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:49.742 --rc genhtml_branch_coverage=1 00:19:49.742 --rc genhtml_function_coverage=1 00:19:49.742 --rc genhtml_legend=1 00:19:49.742 --rc geninfo_all_blocks=1 00:19:49.742 --rc geninfo_unexecuted_blocks=1 00:19:49.742 00:19:49.742 ' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:49.742 06:12:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:49.742 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:49.743 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@472 -- # prepare_net_devs 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@434 -- # local -g is_hw=no 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@436 -- # remove_spdk_ns 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@456 -- # nvmf_veth_init 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:49.743 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:50.002 Cannot find device "nvmf_init_br" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:50.002 Cannot find device "nvmf_init_br2" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:50.002 Cannot find device "nvmf_tgt_br" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:50.002 Cannot find device "nvmf_tgt_br2" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:50.002 Cannot find device "nvmf_init_br" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:50.002 Cannot find device "nvmf_init_br2" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:50.002 Cannot find device "nvmf_tgt_br" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:50.002 Cannot find device "nvmf_tgt_br2" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:50.002 Cannot find device "nvmf_br" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:19:50.002 Cannot find device "nvmf_init_if" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:50.002 Cannot find device "nvmf_init_if2" 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:50.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:50.002 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:50.002 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:50.262 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:50.262 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:19:50.262 00:19:50.262 --- 10.0.0.3 ping statistics --- 00:19:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.262 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:50.262 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:19:50.262 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:19:50.262 00:19:50.262 --- 10.0.0.4 ping statistics --- 00:19:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.262 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:50.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:19:50.262 00:19:50.262 --- 10.0.0.1 ping statistics --- 00:19:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.262 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:50.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:19:50.262 00:19:50.262 --- 10.0.0.2 ping statistics --- 00:19:50.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.262 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@457 -- # return 0 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:50.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@505 -- # nvmfpid=90709 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@506 -- # waitforlisten 90709 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90709 ']' 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
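The ping exchanges above close out the shared veth setup helper: two veth pairs for the initiator side (10.0.0.1, 10.0.0.2) and two for the target side inside the nvmf_tgt_ns_spdk namespace (10.0.0.3, 10.0.0.4), all enslaved to the nvmf_br bridge, with comment-tagged iptables ACCEPT rules for TCP/4420. A condensed sketch of that sequence, reduced to the first initiator/target pair (the helper also creates nvmf_init_if2 and nvmf_tgt_if2, as logged):

# condensed reconstruction of the veth/namespace setup traced above (first pair only)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # initiator end + bridge port
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br           # target end + bridge port
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                     # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                  # initiator -> target sanity check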
00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.262 06:12:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:50.262 [2024-10-01 06:12:15.802614] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:19:50.262 [2024-10-01 06:12:15.802861] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.521 [2024-10-01 06:12:15.938941] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:50.521 [2024-10-01 06:12:15.971512] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.521 [2024-10-01 06:12:15.971794] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.521 [2024-10-01 06:12:15.971976] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.521 [2024-10-01 06:12:15.972028] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.521 [2024-10-01 06:12:15.972124] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:50.521 [2024-10-01 06:12:15.972284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.521 [2024-10-01 06:12:15.972291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.521 [2024-10-01 06:12:15.999085] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=90709 00:19:50.521 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:50.780 [2024-10-01 06:12:16.386381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:51.039 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:19:51.039 Malloc0 00:19:51.297 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:19:51.298 06:12:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:51.556 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:51.815 [2024-10-01 06:12:17.373988] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:51.815 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:19:52.074 [2024-10-01 06:12:17.594087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:19:52.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=90756 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 90756 /var/tmp/bdevperf.sock 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 90756 ']' 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
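At this point the target side is fully provisioned: nvmf_tgt runs inside the namespace, and rpc.py (over the default /var/tmp/spdk.sock) has created the TCP transport, a 64 MiB / 512-byte-block Malloc0 bdev, and subsystem nqn.2016-06.io.spdk:cnode1 with ANA reporting enabled, exposed on two listeners (ports 4420 and 4421) at 10.0.0.3. The bdevperf initiator is now being launched against /var/tmp/bdevperf.sock. A sketch of the target-side RPC sequence as it appears in the trace (the -r flag on nvmf_create_subsystem is what the per-listener ANA-state changes later in the test rely on):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# target application, core mask 0x3 (the two available cores), inside the test namespace
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &

$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0                           # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# two listeners on the same subsystem and address -> two I/O paths for the host
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421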
00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:52.074 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:19:52.333 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.333 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:19:52.333 06:12:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:19:52.592 06:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:19:52.851 Nvme0n1 00:19:52.851 06:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:19:53.418 Nvme0n1 00:19:53.418 06:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:19:53.418 06:12:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:19:55.322 06:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:19:55.322 06:12:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:19:55.581 06:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:55.840 06:12:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:19:56.777 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:19:56.777 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:19:56.777 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:56.777 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:19:57.036 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.036 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:19:57.295 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.295 06:12:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:19:57.554 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:19:57.554 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:19:57.554 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.554 06:12:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:19:57.813 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:57.813 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:19:57.813 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:57.813 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:19:58.072 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.072 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:19:58.072 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.072 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:19:58.331 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.331 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:19:58.331 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:19:58.331 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:19:58.590 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:19:58.590 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:19:58.590 06:12:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:19:58.849 06:12:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:19:59.108 06:12:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:20:00.046 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:20:00.046 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:00.046 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:00.046 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.305 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:00.305 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:00.305 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.305 06:12:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:00.564 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.564 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:00.564 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.564 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:00.823 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:00.823 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:00.823 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:00.823 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:01.082 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.082 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:01.082 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.082 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:01.341 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.341 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:01.341 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:01.341 06:12:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:01.600 06:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:01.600 06:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:20:01.600 06:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:01.858 06:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:02.116 06:12:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:20:03.053 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:20:03.053 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:03.053 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:03.053 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.622 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.622 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:03.622 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.622 06:12:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:03.622 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:03.622 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:03.622 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:03.622 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.912 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:03.912 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 
connected true 00:20:03.912 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:03.912 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:04.170 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.171 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:04.171 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.171 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:04.429 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.429 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:04.429 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:04.429 06:12:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:04.687 06:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:04.687 06:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:20:04.687 06:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:04.945 06:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:05.253 06:12:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:20:06.188 06:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:20:06.188 06:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:06.188 06:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:06.188 06:12:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.446 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.446 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:06.446 06:12:32 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.446 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:06.704 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:06.704 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:06.704 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:06.704 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:06.963 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:06.963 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:06.963 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:06.963 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.531 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.531 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:07.531 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:07.531 06:12:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.531 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:07.531 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:07.531 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:07.531 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:07.791 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:07.791 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:20:07.791 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:08.050 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:08.308 06:12:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:20:09.713 06:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:20:09.713 06:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:09.713 06:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.713 06:12:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:09.713 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.713 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:09.713 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.713 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:09.972 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:09.972 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:09.972 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:09.972 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:10.231 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.231 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:10.231 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.231 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:10.490 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:10.490 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:10.490 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.490 06:12:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").accessible' 00:20:10.749 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:10.749 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:10.749 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:10.749 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:11.007 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:11.007 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:20:11.007 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:20:11.266 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:11.524 06:12:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:20:12.459 06:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:20:12.459 06:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:12.459 06:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.459 06:12:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:12.718 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:12.718 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:12.718 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.718 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:12.977 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:12.977 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:12.977 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:12.977 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 
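Every check_status round in this trace reduces to the same probe, visible at host/multipath_status.sh@64: ask bdevperf (over /var/tmp/bdevperf.sock) for its I/O paths and pick out one boolean field for one listener port with jq. A reconstruction of that port_status helper, assuming only what the logged commands show:

# reconstruction of port_status from the rpc.py + jq pipeline logged above
# usage: port_status 4420 current true
port_status() {
    local port=$1 field=$2 expected=$3
    local actual
    actual=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
             jq -r ".poll_groups[].io_paths[] | select (.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}

Reading the rounds above, the three fields behave as expected for ANA multipath: connected tracks whether the TCP connection to a listener is up, accessible goes false only when that listener's ANA state is inaccessible, and current marks the path the host is actually using for I/O.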
00:20:13.236 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.236 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:13.236 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.236 06:12:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:13.495 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:13.495 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:20:13.495 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.495 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:13.753 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:13.753 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:13.753 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:13.753 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:14.012 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:14.012 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:20:14.271 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:20:14.271 06:12:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:20:14.529 06:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:14.788 06:12:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 
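The bdev_nvme_set_multipath_policy call logged just above switches Nvme0n1 from the failover behaviour exercised so far (only one path reports current=true at a time) to active_active, after which both optimized paths are expected to be current simultaneously, which is what the check_status true true true true true true round in progress here verifies. The relevant initiator- and target-side RPCs, as logged:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# initiator side: spread I/O across all optimized paths instead of failing over
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
# target side: report both listeners as ANA optimized
$RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized
$RPC nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized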
00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.165 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:16.424 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.424 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:16.424 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.424 06:12:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:16.684 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.684 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:16.684 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:16.684 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.943 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:16.943 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:16.943 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:16.943 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:17.202 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.202 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:17.202 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:17.202 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:17.462 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:17.462 
06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:20:17.462 06:12:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:17.721 06:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:20:17.980 06:12:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:20:18.917 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:20:18.917 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:20:18.917 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:18.917 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.176 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:19.436 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.436 06:12:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:19.436 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.436 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:19.436 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:19.436 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.695 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.695 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:19.695 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:19.695 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:19.954 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:19.954 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:19.954 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:19.954 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:20.214 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:20.214 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:20:20.214 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:20.473 06:12:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:20:20.732 06:12:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:20:21.670 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:20:21.670 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:21.670 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.670 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:21.929 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:21.929 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:20:21.929 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:21.930 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:22.189 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.189 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 
connected true 00:20:22.189 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.189 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:22.449 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.449 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:22.449 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.449 06:12:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:22.708 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.708 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:22.708 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.708 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:22.968 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:22.968 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:20:22.968 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:22.968 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:20:23.228 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:23.228 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:20:23.228 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:20:23.487 06:12:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:20:23.747 06:12:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:20:24.685 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:20:24.685 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:20:24.685 06:12:50 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:24.685 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.252 06:12:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:20:25.511 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.511 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:20:25.511 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.511 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:20:25.768 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:25.768 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:20:25.768 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:25.768 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:20:26.026 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:20:26.026 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:20:26.026 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:20:26.026 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4421").accessible' 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 90756 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90756 ']' 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90756 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90756 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90756' 00:20:26.285 killing process with pid 90756 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90756 00:20:26.285 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90756 00:20:26.285 { 00:20:26.285 "results": [ 00:20:26.285 { 00:20:26.285 "job": "Nvme0n1", 00:20:26.285 "core_mask": "0x4", 00:20:26.285 "workload": "verify", 00:20:26.285 "status": "terminated", 00:20:26.285 "verify_range": { 00:20:26.285 "start": 0, 00:20:26.285 "length": 16384 00:20:26.285 }, 00:20:26.285 "queue_depth": 128, 00:20:26.285 "io_size": 4096, 00:20:26.285 "runtime": 32.931498, 00:20:26.285 "iops": 9626.589109308055, 00:20:26.285 "mibps": 37.60386370823459, 00:20:26.285 "io_failed": 0, 00:20:26.285 "io_timeout": 0, 00:20:26.285 "avg_latency_us": 13269.222272689993, 00:20:26.285 "min_latency_us": 396.5672727272727, 00:20:26.285 "max_latency_us": 4026531.84 00:20:26.285 } 00:20:26.285 ], 00:20:26.285 "core_count": 1 00:20:26.285 } 00:20:26.547 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 90756 00:20:26.547 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.547 [2024-10-01 06:12:17.656824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:20:26.547 [2024-10-01 06:12:17.656937] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90756 ] 00:20:26.547 [2024-10-01 06:12:17.785295] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:26.547 [2024-10-01 06:12:17.818051] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.547 [2024-10-01 06:12:17.845228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:26.547 [2024-10-01 06:12:18.760072] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:20:26.547 Running I/O for 90 seconds... 00:20:26.547 7957.00 IOPS, 31.08 MiB/s 8010.00 IOPS, 31.29 MiB/s 8453.33 IOPS, 33.02 MiB/s 8992.00 IOPS, 35.12 MiB/s 9219.20 IOPS, 36.01 MiB/s 9431.17 IOPS, 36.84 MiB/s 9595.86 IOPS, 37.48 MiB/s 9693.38 IOPS, 37.86 MiB/s 9782.89 IOPS, 38.21 MiB/s 9877.40 IOPS, 38.58 MiB/s 9927.73 IOPS, 38.78 MiB/s 9975.08 IOPS, 38.97 MiB/s 10033.00 IOPS, 39.19 MiB/s 10066.07 IOPS, 39.32 MiB/s [2024-10-01 06:12:33.581579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:2920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:2936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:2968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.581950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.581971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:2520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582261] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:2544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:2592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:20:26.547 
[2024-10-01 06:12:33.582667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:3000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:3016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:3024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:3032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.582854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.582968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:2616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.582983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:2624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.583018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 
cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.583053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:2640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.583086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.583121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.547 [2024-10-01 06:12:33.583155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:3048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.583189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.583224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.583259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:20:26.547 [2024-10-01 06:12:33.583279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.547 [2024-10-01 06:12:33.583293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:3080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:3096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:3144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.583976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.583991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:2672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 
[2024-10-01 06:12:33.584194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:2720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:2752 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.548 [2024-10-01 06:12:33.584738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.548 [2024-10-01 06:12:33.584848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:20:26.548 [2024-10-01 06:12:33.584867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.584882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.584901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.584915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.584935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:118 nsid:1 lba:3280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.584949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.584980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.585013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.585048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.585602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.585617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.549 [2024-10-01 06:12:33.586325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:20:26.549 [2024-10-01 06:12:33.586418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:3312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:3344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.586964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.586983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.587009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.587050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.587091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.587111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.587138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:3432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.587154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.549 [2024-10-01 06:12:33.587180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:3440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.549 [2024-10-01 06:12:33.587196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:3448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:3480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:33.587444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:33.587459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.550 9851.00 IOPS, 38.48 MiB/s 9235.31 IOPS, 36.08 MiB/s 8692.06 IOPS, 33.95 MiB/s 8209.17 IOPS, 32.07 MiB/s 7964.89 IOPS, 31.11 MiB/s 8088.65 IOPS, 31.60 MiB/s 8199.48 IOPS, 32.03 MiB/s 8407.32 IOPS, 32.84 MiB/s 8651.00 IOPS, 33.79 MiB/s 8853.08 IOPS, 34.58 MiB/s 8937.88 IOPS, 34.91 MiB/s 8999.35 IOPS, 35.15 MiB/s 9048.26 IOPS, 35.34 MiB/s 9186.79 IOPS, 35.89 MiB/s 9348.59 IOPS, 36.52 MiB/s 9505.37 IOPS, 37.13 MiB/s [2024-10-01 06:12:49.260351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:90616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.260804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:90576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:90608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:90640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.260966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:90672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.260982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 
nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:90704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.261244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:90736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.261277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:90760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.261325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:90800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.261357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:90680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.550 [2024-10-01 06:12:49.261390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.550 [2024-10-01 06:12:49.261562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:20:26.550 [2024-10-01 06:12:49.261580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:90728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.261661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:90768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.261693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
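The repeated NOTICE pairs above and below are SPDK's host-side qpair dump: each WRITE/READ line from nvme_io_qpair_print_command is followed by its completion from spdk_nvme_print_completion, and the "(03/02)" status (status code type 0x3 / status code 0x02) is what the print itself names ASYMMETRIC ACCESS INACCESSIBLE, i.e. the I/O completed on a path whose ANA state the multipath test has presumably made inaccessible. When triaging a saved console log offline, a rough way to collapse this noise is a sketch like the one below; the build.log filename is only an assumption for illustration.

  # hypothetical offline triage, assuming the console output was saved as build.log
  grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' build.log | wc -l       # how many ANA-inaccessible completions
  grep -oE 'spdk_nvme_print_completion: \*NOTICE\*: [A-Z ]+ \([0-9a-f]+/[0-9a-f]+\)' build.log \
      | sort | uniq -c | sort -rn                                          # completions grouped by status string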
00:20:26.551 [2024-10-01 06:12:49.261712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.261895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.261955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.261978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:90880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.261992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.262991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:90792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.263017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:90888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.263058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:90920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.551 [2024-10-01 06:12:49.263091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.263124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.263157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.263191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:20:26.551 [2024-10-01 06:12:49.263210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:20:26.551 [2024-10-01 06:12:49.263224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:20:26.551 9581.74 IOPS, 37.43 MiB/s 9606.81 IOPS, 37.53 MiB/s Received shutdown signal, test time was about 32.932221 seconds 00:20:26.551 00:20:26.551 Latency(us) 00:20:26.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.551 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.551 Verification LBA range: start 0x0 length 0x4000 00:20:26.551 Nvme0n1 : 32.93 9626.59 37.60 0.00 0.00 13269.22 396.57 4026531.84 00:20:26.551 =================================================================================================================== 00:20:26.551 Total : 9626.59 37.60 0.00 0.00 13269.22 396.57 4026531.84 00:20:26.551 [2024-10-01 06:12:51.860618] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:20:26.551 06:12:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@121 -- # sync 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:26.810 rmmod nvme_tcp 00:20:26.810 rmmod nvme_fabrics 00:20:26.810 rmmod nvme_keyring 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@513 -- # '[' -n 90709 ']' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@514 -- # killprocess 90709 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 90709 ']' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 90709 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90709 00:20:26.810 killing process with pid 90709 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90709' 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 90709 00:20:26.810 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 90709 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-save 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@787 -- # iptables-restore 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # 
ip link set nvmf_init_br nomaster 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:27.069 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:20:27.328 ************************************ 00:20:27.328 END TEST nvmf_host_multipath_status 00:20:27.328 ************************************ 00:20:27.328 00:20:27.328 real 0m37.598s 00:20:27.328 user 2m2.186s 00:20:27.328 sys 0m10.800s 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:27.328 ************************************ 00:20:27.328 START TEST nvmf_discovery_remove_ifc 00:20:27.328 ************************************ 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:20:27.328 * Looking for test storage... 
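The teardown traced above, at the end of the multipath_status run, is what nvmftestfini amounts to: delete the test subsystem over RPC, unload the kernel NVMe/TCP initiator modules, kill the nvmf_tgt process, strip only SPDK's iptables rules, and dismantle the veth/bridge/namespace topology. A condensed sketch of the same sequence follows, with names taken from the trace (nvmf_tgt_ns_spdk, nvmf_br, pid 90709); it is not a drop-in replacement for the harness functions.

  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
  modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics            # trace shows nvme_tcp/nvme_fabrics/nvme_keyring removed
  kill 90709                                                        # nvmf_tgt pid from this run; the harness wait(1)s on it
  iptables-save | grep -v SPDK_NVMF | iptables-restore              # remove only the SPDK_NVMF-tagged ACCEPT rules
  for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
      ip link set "$ifc" nomaster && ip link set "$ifc" down        # detach bridge ports, then bring them down
  done
  ip link delete nvmf_br type bridge
  ip link delete nvmf_init_if && ip link delete nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
  ip netns delete nvmf_tgt_ns_spdk                                  # roughly what _remove_spdk_ns does (assumption)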
00:20:27.328 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lcov --version 00:20:27.328 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:27.588 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:27.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.589 --rc genhtml_branch_coverage=1 00:20:27.589 --rc genhtml_function_coverage=1 00:20:27.589 --rc genhtml_legend=1 00:20:27.589 --rc geninfo_all_blocks=1 00:20:27.589 --rc geninfo_unexecuted_blocks=1 00:20:27.589 00:20:27.589 ' 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:27.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.589 --rc genhtml_branch_coverage=1 00:20:27.589 --rc genhtml_function_coverage=1 00:20:27.589 --rc genhtml_legend=1 00:20:27.589 --rc geninfo_all_blocks=1 00:20:27.589 --rc geninfo_unexecuted_blocks=1 00:20:27.589 00:20:27.589 ' 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:27.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.589 --rc genhtml_branch_coverage=1 00:20:27.589 --rc genhtml_function_coverage=1 00:20:27.589 --rc genhtml_legend=1 00:20:27.589 --rc geninfo_all_blocks=1 00:20:27.589 --rc geninfo_unexecuted_blocks=1 00:20:27.589 00:20:27.589 ' 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:27.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:27.589 --rc genhtml_branch_coverage=1 00:20:27.589 --rc genhtml_function_coverage=1 00:20:27.589 --rc genhtml_legend=1 00:20:27.589 --rc geninfo_all_blocks=1 00:20:27.589 --rc geninfo_unexecuted_blocks=1 00:20:27.589 00:20:27.589 ' 00:20:27.589 06:12:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.589 06:12:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:27.589 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:27.589 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.590 06:12:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:27.590 Cannot find device "nvmf_init_br" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:27.590 Cannot find device "nvmf_init_br2" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:27.590 Cannot find device "nvmf_tgt_br" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.590 Cannot find device "nvmf_tgt_br2" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:27.590 Cannot find device "nvmf_init_br" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:27.590 Cannot find device "nvmf_init_br2" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:27.590 Cannot find device "nvmf_tgt_br" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:27.590 Cannot find device "nvmf_tgt_br2" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:27.590 Cannot find device "nvmf_br" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:27.590 Cannot find device "nvmf_init_if" 00:20:27.590 06:12:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:27.590 Cannot find device "nvmf_init_if2" 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.590 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.590 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:27.849 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:27.850 06:12:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:27.850 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:27.850 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:27.850 00:20:27.850 --- 10.0.0.3 ping statistics --- 00:20:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.850 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:27.850 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:27.850 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:20:27.850 00:20:27.850 --- 10.0.0.4 ping statistics --- 00:20:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.850 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:27.850 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:27.850 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:27.850 00:20:27.850 --- 10.0.0.1 ping statistics --- 00:20:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.850 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:27.850 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:27.850 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:27.850 00:20:27.850 --- 10.0.0.2 ping statistics --- 00:20:27.850 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:27.850 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@457 -- # return 0 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@505 -- # nvmfpid=91576 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@506 -- # waitforlisten 91576 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91576 ']' 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:27.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
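The nvmf_veth_init trace above rebuilds the test network for discovery_remove_ifc: initiator-side veths in the root namespace (10.0.0.1/10.0.0.2), target-side veths inside netns nvmf_tgt_ns_spdk (10.0.0.3/10.0.0.4), everything joined through bridge nvmf_br, TCP port 4420 opened in iptables, reachability verified with ping, and finally nvmf_tgt started inside the namespace. A minimal sketch of one initiator/target pair (the harness creates two of each, and the exact ordering differs slightly):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator end stays in the root ns
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br          # target end moves into the ns
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br up  && ip link set nvmf_tgt_br master nvmf_br
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT # let NVMe/TCP traffic in
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT               # allow bridged traffic to pass
  ping -c 1 10.0.0.3                                                # root ns -> target ns
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                 # target ns -> root ns
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &   # target app, as traced for this test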
00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:27.850 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.109 [2024-10-01 06:12:53.469306] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:20:28.109 [2024-10-01 06:12:53.469398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.109 [2024-10-01 06:12:53.608334] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.109 [2024-10-01 06:12:53.640582] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.109 [2024-10-01 06:12:53.640650] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.109 [2024-10-01 06:12:53.640659] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.109 [2024-10-01 06:12:53.640683] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.109 [2024-10-01 06:12:53.640690] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:28.109 [2024-10-01 06:12:53.640714] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:28.109 [2024-10-01 06:12:53.667721] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:28.109 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.109 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:28.109 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:28.109 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:28.109 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.368 [2024-10-01 06:12:53.770368] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.368 [2024-10-01 06:12:53.778443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:20:28.368 null0 00:20:28.368 [2024-10-01 06:12:53.810357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=91595 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 
0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 91595 /tmp/host.sock 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 91595 ']' 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:20:28.368 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:28.368 06:12:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.368 [2024-10-01 06:12:53.887946] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:20:28.368 [2024-10-01 06:12:53.888057] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91595 ] 00:20:28.626 [2024-10-01 06:12:54.028025] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.626 [2024-10-01 06:12:54.068193] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:28.626 [2024-10-01 06:12:54.189442] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.626 06:12:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.004 [2024-10-01 06:12:55.222981] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:30.004 [2024-10-01 06:12:55.223023] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:30.004 [2024-10-01 06:12:55.223042] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:30.004 [2024-10-01 06:12:55.229024] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:20:30.004 [2024-10-01 06:12:55.285357] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:30.004 [2024-10-01 06:12:55.285425] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:30.004 [2024-10-01 06:12:55.285448] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:30.004 [2024-10-01 06:12:55.285462] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:20:30.004 [2024-10-01 06:12:55.285480] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.004 [2024-10-01 06:12:55.291934] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x11366f0 was disconnected and freed. delete nvme_qpair. 
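The get_bdev_list/wait_for_bdev pattern exercised below is simply a poll over the bdev_get_bdevs RPC; a minimal sketch using the stock rpc.py client in place of the suite's rpc_cmd wrapper (the ./scripts/rpc.py path assumes an SPDK checkout as working directory):

    # list current bdev names as one sorted, space-separated string
    get_bdev_list() {
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    # wait_for_bdev nvme0n1: poll once per second until the expected name shows up
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done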
00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.004 06:12:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:30.941 06:12:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:31.921 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.181 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:32.181 06:12:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:33.116 06:12:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:34.050 06:12:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 
1 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:35.425 06:13:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:35.425 [2024-10-01 06:13:00.714409] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:20:35.425 [2024-10-01 06:13:00.714494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.425 [2024-10-01 06:13:00.714508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.425 [2024-10-01 06:13:00.714518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.425 [2024-10-01 06:13:00.714526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.425 [2024-10-01 06:13:00.714536] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.425 [2024-10-01 06:13:00.714544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.425 [2024-10-01 06:13:00.714553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.425 [2024-10-01 06:13:00.714561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.425 [2024-10-01 06:13:00.714569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:35.425 [2024-10-01 06:13:00.714577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:35.425 [2024-10-01 06:13:00.714585] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1111c40 is same with the state(6) to be set 00:20:35.425 [2024-10-01 06:13:00.724408] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1111c40 (9): Bad file descriptor 00:20:35.425 [2024-10-01 06:13:00.734422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.361 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:36.362 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:36.362 [2024-10-01 06:13:01.800004] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:20:36.362 [2024-10-01 06:13:01.800130] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1111c40 with addr=10.0.0.3, port=4420 00:20:36.362 [2024-10-01 06:13:01.800163] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1111c40 is same with the state(6) to be set 00:20:36.362 [2024-10-01 06:13:01.800241] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1111c40 (9): Bad file descriptor 00:20:36.362 [2024-10-01 06:13:01.801144] bdev_nvme.c:3029:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:36.362 [2024-10-01 06:13:01.801260] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:36.362 [2024-10-01 06:13:01.801284] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:36.362 [2024-10-01 06:13:01.801307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:36.362 [2024-10-01 06:13:01.801369] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:36.362 [2024-10-01 06:13:01.801394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:36.362 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.362 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:20:36.362 06:13:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:37.299 [2024-10-01 06:13:02.801439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:20:37.299 [2024-10-01 06:13:02.801486] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:20:37.299 [2024-10-01 06:13:02.801511] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:20:37.299 [2024-10-01 06:13:02.801519] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:20:37.299 [2024-10-01 06:13:02.801536] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
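The reconnect/timeout behaviour above is driven by the options passed to bdev_nvme_start_discovery at @69 earlier in this test: retry the connection every second, fail pending I/O after a second, and drop the controller roughly two seconds after the path to 10.0.0.3 is lost. Restated with the stock rpc.py client, same flags as in the trace:

    # reconnect every 1s, fast-fail I/O after 1s, delete the controller ~2s
    # after the connection to 10.0.0.3:8009 is lost
    ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach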
00:20:37.299 [2024-10-01 06:13:02.801560] bdev_nvme.c:6913:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:20:37.299 [2024-10-01 06:13:02.801591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.299 [2024-10-01 06:13:02.801605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.299 [2024-10-01 06:13:02.801616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.299 [2024-10-01 06:13:02.801624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.299 [2024-10-01 06:13:02.801633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.299 [2024-10-01 06:13:02.801640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.299 [2024-10-01 06:13:02.801649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.299 [2024-10-01 06:13:02.801656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.299 [2024-10-01 06:13:02.801665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:20:37.299 [2024-10-01 06:13:02.801673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:37.299 [2024-10-01 06:13:02.801681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
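Once both the subsystem controller and the discovery entry give up, the namespace bdev is deleted; that is what wait_for_bdev '' (at @79 above) polls for, i.e. an empty bdev list. Using the get_bdev_list helper sketched earlier:

    # poll until bdev_get_bdevs returns nothing, confirming nvme0n1 was removed
    while [[ -n "$(get_bdev_list)" ]]; do
        sleep 1
    done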
00:20:37.299 [2024-10-01 06:13:02.802093] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1100180 (9): Bad file descriptor 00:20:37.299 [2024-10-01 06:13:02.803104] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:20:37.299 [2024-10-01 06:13:02.803141] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:37.299 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:37.558 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:37.558 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:37.558 06:13:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:38.494 06:13:03 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:38.494 06:13:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.494 06:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:20:38.494 06:13:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:20:39.430 [2024-10-01 06:13:04.806933] bdev_nvme.c:7162:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:20:39.430 [2024-10-01 06:13:04.806957] bdev_nvme.c:7242:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:20:39.430 [2024-10-01 06:13:04.806993] bdev_nvme.c:7125:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:20:39.430 [2024-10-01 06:13:04.812973] bdev_nvme.c:7091:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:20:39.430 [2024-10-01 06:13:04.868928] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:20:39.430 [2024-10-01 06:13:04.868998] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:20:39.430 [2024-10-01 06:13:04.869019] bdev_nvme.c:7952:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:20:39.430 [2024-10-01 06:13:04.869032] bdev_nvme.c:6981:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:20:39.430 [2024-10-01 06:13:04.869040] bdev_nvme.c:6940:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:20:39.430 [2024-10-01 06:13:04.875560] bdev_nvme.c:1735:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1145af0 was disconnected and freed. delete nvme_qpair. 
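With the target address restored at @82/@83, the still-running discovery poller reconnects to 10.0.0.3:8009 on its own and re-attaches the subsystem under a fresh controller name, which is why the script now waits for nvme1n1 rather than nvme0n1; roughly:

    # bring the target interface back inside the namespace
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # the discovery service reconnects by itself; wait for the new bdev
    while [[ "$(get_bdev_list)" != "nvme1n1" ]]; do
        sleep 1
    done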
00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:39.430 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:20:39.689 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 91595 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91595 ']' 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91595 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91595 00:20:39.690 killing process with pid 91595 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91595' 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91595 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91595 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:39.690 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.949 rmmod nvme_tcp 00:20:39.949 rmmod nvme_fabrics 00:20:39.949 rmmod nvme_keyring 00:20:39.949 06:13:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@513 -- # '[' -n 91576 ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@514 -- # killprocess 91576 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 91576 ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 91576 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 91576 00:20:39.949 killing process with pid 91576 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 91576' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 91576 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 91576 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-save 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@787 -- # iptables-restore 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:39.949 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:20:40.208 00:20:40.208 real 0m12.968s 00:20:40.208 user 0m22.214s 00:20:40.208 sys 0m2.306s 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:20:40.208 ************************************ 00:20:40.208 END TEST nvmf_discovery_remove_ifc 00:20:40.208 ************************************ 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.208 06:13:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:40.468 ************************************ 00:20:40.468 START TEST nvmf_identify_kernel_target 00:20:40.468 ************************************ 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:20:40.468 * Looking for test storage... 
00:20:40.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lcov --version 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:40.468 06:13:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:40.468 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.469 --rc genhtml_branch_coverage=1 00:20:40.469 --rc genhtml_function_coverage=1 00:20:40.469 --rc genhtml_legend=1 00:20:40.469 --rc geninfo_all_blocks=1 00:20:40.469 --rc geninfo_unexecuted_blocks=1 00:20:40.469 00:20:40.469 ' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.469 --rc genhtml_branch_coverage=1 00:20:40.469 --rc genhtml_function_coverage=1 00:20:40.469 --rc genhtml_legend=1 00:20:40.469 --rc geninfo_all_blocks=1 00:20:40.469 --rc geninfo_unexecuted_blocks=1 00:20:40.469 00:20:40.469 ' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.469 --rc genhtml_branch_coverage=1 00:20:40.469 --rc genhtml_function_coverage=1 00:20:40.469 --rc genhtml_legend=1 00:20:40.469 --rc geninfo_all_blocks=1 00:20:40.469 --rc geninfo_unexecuted_blocks=1 00:20:40.469 00:20:40.469 ' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:40.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:40.469 --rc genhtml_branch_coverage=1 00:20:40.469 --rc genhtml_function_coverage=1 00:20:40.469 --rc genhtml_legend=1 00:20:40.469 --rc geninfo_all_blocks=1 00:20:40.469 --rc geninfo_unexecuted_blocks=1 00:20:40.469 00:20:40.469 ' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
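The trace above is scripts/common.sh comparing the detected lcov version (1.15) against 2 field by field to decide whether the legacy --rc lcov_*_coverage options are needed; a simplified stand-in for that dotted-version comparison (the real cmp_versions helper also handles '-' separators and other operators):

    # return success when dotted version $1 is strictly lower than $2
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1
    }
    version_lt 1.15 2 && echo "lcov older than 2: keep the legacy lcov options"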
00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:40.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:20:40.469 06:13:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:40.469 06:13:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:40.469 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:40.470 Cannot find device "nvmf_init_br" 00:20:40.470 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:20:40.470 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:40.470 Cannot find device "nvmf_init_br2" 00:20:40.470 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:20:40.470 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:40.729 Cannot find device "nvmf_tgt_br" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:40.729 Cannot find device "nvmf_tgt_br2" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:40.729 Cannot find device "nvmf_init_br" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:40.729 Cannot find device "nvmf_init_br2" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:40.729 Cannot find device "nvmf_tgt_br" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:40.729 Cannot find device "nvmf_tgt_br2" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:40.729 Cannot find device "nvmf_br" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:40.729 Cannot find device "nvmf_init_if" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:40.729 Cannot find device "nvmf_init_if2" 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:40.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.729 06:13:06 
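The run of "Cannot find device ..." and "Cannot open network namespace ..." errors above is expected on a clean runner: before building anything, nvmf_veth_init tears down whatever a previous run might have left behind, and each cleanup command is immediately followed by a bare "true" record at the same source line, consistent with an "A || true" idiom that keeps a missing device from failing the test. Sketched outside the harness (device names taken from the trace):

    # idempotent pre-cleanup: a device that does not exist is not an error
    ip link set nvmf_init_br nomaster                       || true
    ip link delete nvmf_br type bridge                      || true
    ip link delete nvmf_init_if                             || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true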
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:40.729 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:40.729 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:40.729 06:13:06 
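Pieced together from the records above, the virtual topology is four veth pairs whose target ends live in the nvmf_tgt_ns_spdk namespace, with the bridge ends later enslaved to nvmf_br. Rebuilt by hand it is roughly the following (interface names and addresses exactly as traced; ordering condensed, link-up commands omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if     # initiator
    ip addr add 10.0.0.2/24 dev nvmf_init_if2    # second initiator
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if    # target
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2   # second target
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # the records that follow bring every link up, enslave the *_br ends to nvmf_br,
    # open TCP port 4420 in iptables, and ping each address to verify connectivity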
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:40.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:40.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.087 ms 00:20:40.987 00:20:40.987 --- 10.0.0.3 ping statistics --- 00:20:40.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.987 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:40.987 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:40.987 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:20:40.987 00:20:40.987 --- 10.0.0.4 ping statistics --- 00:20:40.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.987 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:40.987 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.987 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:20:40.987 00:20:40.987 --- 10.0.0.1 ping statistics --- 00:20:40.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.987 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:40.987 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:40.987 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:40.987 00:20:40.987 --- 10.0.0.2 ping statistics --- 00:20:40.987 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.987 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@457 -- # return 0 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@765 -- # local ip 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # local block nvme 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # [[ ! -e /sys/module/nvmet ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:40.987 06:13:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:41.246 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:41.246 Waiting for block devices as requested 00:20:41.505 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.505 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:41.505 No valid GPT data, bailing 00:20:41.505 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:41.764 06:13:07 
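The loop above walks /sys/block/nvme*, skips zoned namespaces, and uses spdk-gpt.py plus blkid to decide whether a disk is already in use; "No valid GPT data, bailing" together with an empty PTTYPE means the namespace is free and becomes the candidate backing device. A condensed sketch of that selection logic, with helper names simplified (the harness's own block_in_use does more than this):

    nvme=""
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # skip zoned namespaces
        if [ -e "$block/queue/zoned" ] && [ "$(cat "$block/queue/zoned")" != none ]; then
            continue
        fi
        # a namespace with no partition-table type reported by blkid is treated as unused
        if [ -z "$(blkid -s PTTYPE -o value "/dev/$dev")" ]; then
            nvme=/dev/$dev
        fi
    done
    echo "backing device: $nvme"    # the trace above ends up with /dev/nvme1n1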
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:41.764 No valid GPT data, bailing 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.764 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:41.765 No valid GPT data, bailing 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
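Once a free namespace is chosen, configure_kernel_target builds the kernel NVMe-oF target entirely through configfs, as the mkdir / echo / ln -s sequence in the following records shows. xtrace does not display redirection targets, so the attribute file names below are the standard nvmet configfs ones, inferred rather than copied from the log; the values are the ones actually echoed in the trace:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"          # inferred target file
    echo 1             > "$subsys/attr_allow_any_host"                    # inferred
    echo /dev/nvme1n1  > "$subsys/namespaces/1/device_path"               # inferred
    echo 1             > "$subsys/namespaces/1/enable"                    # inferred
    echo 10.0.0.1      > "$nvmet/ports/1/addr_traddr"                     # inferred
    echo tcp           > "$nvmet/ports/1/addr_trtype"                     # inferred
    echo 4420          > "$nvmet/ports/1/addr_trsvcid"                    # inferred
    echo ipv4          > "$nvmet/ports/1/addr_adrfam"                     # inferred
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"
    # after this, nvme discover against 10.0.0.1:4420 returns two discovery log records,
    # matching the output shown in the trace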
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:41.765 No valid GPT data, bailing 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # echo 1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo 1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo tcp 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 4420 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # echo ipv4 00:20:41.765 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:42.024 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -a 10.0.0.1 -t tcp -s 4420 00:20:42.024 00:20:42.024 Discovery Log Number of Records 2, Generation counter 2 00:20:42.024 =====Discovery Log Entry 0====== 00:20:42.024 trtype: tcp 00:20:42.024 adrfam: ipv4 00:20:42.024 subtype: current discovery subsystem 00:20:42.024 treq: not specified, sq flow control disable supported 00:20:42.024 portid: 1 00:20:42.024 trsvcid: 4420 00:20:42.024 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:42.024 traddr: 10.0.0.1 00:20:42.024 eflags: none 00:20:42.024 sectype: none 00:20:42.024 =====Discovery Log Entry 1====== 00:20:42.024 trtype: tcp 00:20:42.024 adrfam: ipv4 00:20:42.024 subtype: nvme subsystem 00:20:42.024 treq: not 
specified, sq flow control disable supported 00:20:42.024 portid: 1 00:20:42.024 trsvcid: 4420 00:20:42.024 subnqn: nqn.2016-06.io.spdk:testnqn 00:20:42.024 traddr: 10.0.0.1 00:20:42.024 eflags: none 00:20:42.024 sectype: none 00:20:42.024 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:20:42.024 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:20:42.024 ===================================================== 00:20:42.024 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:42.024 ===================================================== 00:20:42.024 Controller Capabilities/Features 00:20:42.024 ================================ 00:20:42.024 Vendor ID: 0000 00:20:42.024 Subsystem Vendor ID: 0000 00:20:42.024 Serial Number: 5d04abcab3839b39ce8c 00:20:42.024 Model Number: Linux 00:20:42.024 Firmware Version: 6.8.9-20 00:20:42.024 Recommended Arb Burst: 0 00:20:42.024 IEEE OUI Identifier: 00 00 00 00:20:42.024 Multi-path I/O 00:20:42.024 May have multiple subsystem ports: No 00:20:42.024 May have multiple controllers: No 00:20:42.024 Associated with SR-IOV VF: No 00:20:42.024 Max Data Transfer Size: Unlimited 00:20:42.024 Max Number of Namespaces: 0 00:20:42.024 Max Number of I/O Queues: 1024 00:20:42.024 NVMe Specification Version (VS): 1.3 00:20:42.024 NVMe Specification Version (Identify): 1.3 00:20:42.024 Maximum Queue Entries: 1024 00:20:42.024 Contiguous Queues Required: No 00:20:42.024 Arbitration Mechanisms Supported 00:20:42.024 Weighted Round Robin: Not Supported 00:20:42.024 Vendor Specific: Not Supported 00:20:42.024 Reset Timeout: 7500 ms 00:20:42.024 Doorbell Stride: 4 bytes 00:20:42.024 NVM Subsystem Reset: Not Supported 00:20:42.024 Command Sets Supported 00:20:42.024 NVM Command Set: Supported 00:20:42.024 Boot Partition: Not Supported 00:20:42.024 Memory Page Size Minimum: 4096 bytes 00:20:42.024 Memory Page Size Maximum: 4096 bytes 00:20:42.024 Persistent Memory Region: Not Supported 00:20:42.024 Optional Asynchronous Events Supported 00:20:42.024 Namespace Attribute Notices: Not Supported 00:20:42.024 Firmware Activation Notices: Not Supported 00:20:42.024 ANA Change Notices: Not Supported 00:20:42.024 PLE Aggregate Log Change Notices: Not Supported 00:20:42.024 LBA Status Info Alert Notices: Not Supported 00:20:42.024 EGE Aggregate Log Change Notices: Not Supported 00:20:42.024 Normal NVM Subsystem Shutdown event: Not Supported 00:20:42.024 Zone Descriptor Change Notices: Not Supported 00:20:42.024 Discovery Log Change Notices: Supported 00:20:42.024 Controller Attributes 00:20:42.024 128-bit Host Identifier: Not Supported 00:20:42.024 Non-Operational Permissive Mode: Not Supported 00:20:42.024 NVM Sets: Not Supported 00:20:42.024 Read Recovery Levels: Not Supported 00:20:42.024 Endurance Groups: Not Supported 00:20:42.024 Predictable Latency Mode: Not Supported 00:20:42.024 Traffic Based Keep ALive: Not Supported 00:20:42.024 Namespace Granularity: Not Supported 00:20:42.024 SQ Associations: Not Supported 00:20:42.024 UUID List: Not Supported 00:20:42.024 Multi-Domain Subsystem: Not Supported 00:20:42.024 Fixed Capacity Management: Not Supported 00:20:42.024 Variable Capacity Management: Not Supported 00:20:42.025 Delete Endurance Group: Not Supported 00:20:42.025 Delete NVM Set: Not Supported 00:20:42.025 Extended LBA Formats Supported: Not Supported 00:20:42.025 Flexible Data 
Placement Supported: Not Supported 00:20:42.025 00:20:42.025 Controller Memory Buffer Support 00:20:42.025 ================================ 00:20:42.025 Supported: No 00:20:42.025 00:20:42.025 Persistent Memory Region Support 00:20:42.025 ================================ 00:20:42.025 Supported: No 00:20:42.025 00:20:42.025 Admin Command Set Attributes 00:20:42.025 ============================ 00:20:42.025 Security Send/Receive: Not Supported 00:20:42.025 Format NVM: Not Supported 00:20:42.025 Firmware Activate/Download: Not Supported 00:20:42.025 Namespace Management: Not Supported 00:20:42.025 Device Self-Test: Not Supported 00:20:42.025 Directives: Not Supported 00:20:42.025 NVMe-MI: Not Supported 00:20:42.025 Virtualization Management: Not Supported 00:20:42.025 Doorbell Buffer Config: Not Supported 00:20:42.025 Get LBA Status Capability: Not Supported 00:20:42.025 Command & Feature Lockdown Capability: Not Supported 00:20:42.025 Abort Command Limit: 1 00:20:42.025 Async Event Request Limit: 1 00:20:42.025 Number of Firmware Slots: N/A 00:20:42.025 Firmware Slot 1 Read-Only: N/A 00:20:42.025 Firmware Activation Without Reset: N/A 00:20:42.025 Multiple Update Detection Support: N/A 00:20:42.025 Firmware Update Granularity: No Information Provided 00:20:42.025 Per-Namespace SMART Log: No 00:20:42.025 Asymmetric Namespace Access Log Page: Not Supported 00:20:42.025 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:42.025 Command Effects Log Page: Not Supported 00:20:42.025 Get Log Page Extended Data: Supported 00:20:42.025 Telemetry Log Pages: Not Supported 00:20:42.025 Persistent Event Log Pages: Not Supported 00:20:42.025 Supported Log Pages Log Page: May Support 00:20:42.025 Commands Supported & Effects Log Page: Not Supported 00:20:42.025 Feature Identifiers & Effects Log Page:May Support 00:20:42.025 NVMe-MI Commands & Effects Log Page: May Support 00:20:42.025 Data Area 4 for Telemetry Log: Not Supported 00:20:42.025 Error Log Page Entries Supported: 1 00:20:42.025 Keep Alive: Not Supported 00:20:42.025 00:20:42.025 NVM Command Set Attributes 00:20:42.025 ========================== 00:20:42.025 Submission Queue Entry Size 00:20:42.025 Max: 1 00:20:42.025 Min: 1 00:20:42.025 Completion Queue Entry Size 00:20:42.025 Max: 1 00:20:42.025 Min: 1 00:20:42.025 Number of Namespaces: 0 00:20:42.025 Compare Command: Not Supported 00:20:42.025 Write Uncorrectable Command: Not Supported 00:20:42.025 Dataset Management Command: Not Supported 00:20:42.025 Write Zeroes Command: Not Supported 00:20:42.025 Set Features Save Field: Not Supported 00:20:42.025 Reservations: Not Supported 00:20:42.025 Timestamp: Not Supported 00:20:42.025 Copy: Not Supported 00:20:42.025 Volatile Write Cache: Not Present 00:20:42.025 Atomic Write Unit (Normal): 1 00:20:42.025 Atomic Write Unit (PFail): 1 00:20:42.025 Atomic Compare & Write Unit: 1 00:20:42.025 Fused Compare & Write: Not Supported 00:20:42.025 Scatter-Gather List 00:20:42.025 SGL Command Set: Supported 00:20:42.025 SGL Keyed: Not Supported 00:20:42.025 SGL Bit Bucket Descriptor: Not Supported 00:20:42.025 SGL Metadata Pointer: Not Supported 00:20:42.025 Oversized SGL: Not Supported 00:20:42.025 SGL Metadata Address: Not Supported 00:20:42.025 SGL Offset: Supported 00:20:42.025 Transport SGL Data Block: Not Supported 00:20:42.025 Replay Protected Memory Block: Not Supported 00:20:42.025 00:20:42.025 Firmware Slot Information 00:20:42.025 ========================= 00:20:42.025 Active slot: 0 00:20:42.025 00:20:42.025 00:20:42.025 Error Log 
00:20:42.025 ========= 00:20:42.025 00:20:42.025 Active Namespaces 00:20:42.025 ================= 00:20:42.025 Discovery Log Page 00:20:42.025 ================== 00:20:42.025 Generation Counter: 2 00:20:42.025 Number of Records: 2 00:20:42.025 Record Format: 0 00:20:42.025 00:20:42.025 Discovery Log Entry 0 00:20:42.025 ---------------------- 00:20:42.025 Transport Type: 3 (TCP) 00:20:42.025 Address Family: 1 (IPv4) 00:20:42.025 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:42.025 Entry Flags: 00:20:42.025 Duplicate Returned Information: 0 00:20:42.025 Explicit Persistent Connection Support for Discovery: 0 00:20:42.025 Transport Requirements: 00:20:42.025 Secure Channel: Not Specified 00:20:42.025 Port ID: 1 (0x0001) 00:20:42.025 Controller ID: 65535 (0xffff) 00:20:42.025 Admin Max SQ Size: 32 00:20:42.025 Transport Service Identifier: 4420 00:20:42.025 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:42.025 Transport Address: 10.0.0.1 00:20:42.025 Discovery Log Entry 1 00:20:42.025 ---------------------- 00:20:42.025 Transport Type: 3 (TCP) 00:20:42.025 Address Family: 1 (IPv4) 00:20:42.025 Subsystem Type: 2 (NVM Subsystem) 00:20:42.025 Entry Flags: 00:20:42.025 Duplicate Returned Information: 0 00:20:42.025 Explicit Persistent Connection Support for Discovery: 0 00:20:42.025 Transport Requirements: 00:20:42.025 Secure Channel: Not Specified 00:20:42.025 Port ID: 1 (0x0001) 00:20:42.025 Controller ID: 65535 (0xffff) 00:20:42.025 Admin Max SQ Size: 32 00:20:42.025 Transport Service Identifier: 4420 00:20:42.025 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:20:42.025 Transport Address: 10.0.0.1 00:20:42.025 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:20:42.284 get_feature(0x01) failed 00:20:42.284 get_feature(0x02) failed 00:20:42.284 get_feature(0x04) failed 00:20:42.284 ===================================================== 00:20:42.284 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:20:42.284 ===================================================== 00:20:42.284 Controller Capabilities/Features 00:20:42.284 ================================ 00:20:42.284 Vendor ID: 0000 00:20:42.284 Subsystem Vendor ID: 0000 00:20:42.284 Serial Number: 2b458a11810def4cac6d 00:20:42.284 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:20:42.284 Firmware Version: 6.8.9-20 00:20:42.284 Recommended Arb Burst: 6 00:20:42.284 IEEE OUI Identifier: 00 00 00 00:20:42.284 Multi-path I/O 00:20:42.284 May have multiple subsystem ports: Yes 00:20:42.284 May have multiple controllers: Yes 00:20:42.284 Associated with SR-IOV VF: No 00:20:42.284 Max Data Transfer Size: Unlimited 00:20:42.284 Max Number of Namespaces: 1024 00:20:42.284 Max Number of I/O Queues: 128 00:20:42.284 NVMe Specification Version (VS): 1.3 00:20:42.284 NVMe Specification Version (Identify): 1.3 00:20:42.284 Maximum Queue Entries: 1024 00:20:42.284 Contiguous Queues Required: No 00:20:42.284 Arbitration Mechanisms Supported 00:20:42.284 Weighted Round Robin: Not Supported 00:20:42.284 Vendor Specific: Not Supported 00:20:42.284 Reset Timeout: 7500 ms 00:20:42.284 Doorbell Stride: 4 bytes 00:20:42.284 NVM Subsystem Reset: Not Supported 00:20:42.284 Command Sets Supported 00:20:42.284 NVM Command Set: Supported 00:20:42.284 Boot Partition: Not Supported 00:20:42.284 Memory 
Page Size Minimum: 4096 bytes 00:20:42.284 Memory Page Size Maximum: 4096 bytes 00:20:42.284 Persistent Memory Region: Not Supported 00:20:42.284 Optional Asynchronous Events Supported 00:20:42.284 Namespace Attribute Notices: Supported 00:20:42.284 Firmware Activation Notices: Not Supported 00:20:42.284 ANA Change Notices: Supported 00:20:42.284 PLE Aggregate Log Change Notices: Not Supported 00:20:42.284 LBA Status Info Alert Notices: Not Supported 00:20:42.284 EGE Aggregate Log Change Notices: Not Supported 00:20:42.284 Normal NVM Subsystem Shutdown event: Not Supported 00:20:42.284 Zone Descriptor Change Notices: Not Supported 00:20:42.284 Discovery Log Change Notices: Not Supported 00:20:42.284 Controller Attributes 00:20:42.284 128-bit Host Identifier: Supported 00:20:42.284 Non-Operational Permissive Mode: Not Supported 00:20:42.284 NVM Sets: Not Supported 00:20:42.284 Read Recovery Levels: Not Supported 00:20:42.284 Endurance Groups: Not Supported 00:20:42.284 Predictable Latency Mode: Not Supported 00:20:42.284 Traffic Based Keep ALive: Supported 00:20:42.284 Namespace Granularity: Not Supported 00:20:42.284 SQ Associations: Not Supported 00:20:42.284 UUID List: Not Supported 00:20:42.284 Multi-Domain Subsystem: Not Supported 00:20:42.284 Fixed Capacity Management: Not Supported 00:20:42.284 Variable Capacity Management: Not Supported 00:20:42.284 Delete Endurance Group: Not Supported 00:20:42.284 Delete NVM Set: Not Supported 00:20:42.284 Extended LBA Formats Supported: Not Supported 00:20:42.284 Flexible Data Placement Supported: Not Supported 00:20:42.284 00:20:42.284 Controller Memory Buffer Support 00:20:42.284 ================================ 00:20:42.284 Supported: No 00:20:42.284 00:20:42.284 Persistent Memory Region Support 00:20:42.284 ================================ 00:20:42.284 Supported: No 00:20:42.284 00:20:42.284 Admin Command Set Attributes 00:20:42.284 ============================ 00:20:42.284 Security Send/Receive: Not Supported 00:20:42.284 Format NVM: Not Supported 00:20:42.284 Firmware Activate/Download: Not Supported 00:20:42.284 Namespace Management: Not Supported 00:20:42.284 Device Self-Test: Not Supported 00:20:42.284 Directives: Not Supported 00:20:42.284 NVMe-MI: Not Supported 00:20:42.284 Virtualization Management: Not Supported 00:20:42.284 Doorbell Buffer Config: Not Supported 00:20:42.284 Get LBA Status Capability: Not Supported 00:20:42.284 Command & Feature Lockdown Capability: Not Supported 00:20:42.284 Abort Command Limit: 4 00:20:42.284 Async Event Request Limit: 4 00:20:42.284 Number of Firmware Slots: N/A 00:20:42.284 Firmware Slot 1 Read-Only: N/A 00:20:42.284 Firmware Activation Without Reset: N/A 00:20:42.284 Multiple Update Detection Support: N/A 00:20:42.284 Firmware Update Granularity: No Information Provided 00:20:42.284 Per-Namespace SMART Log: Yes 00:20:42.284 Asymmetric Namespace Access Log Page: Supported 00:20:42.284 ANA Transition Time : 10 sec 00:20:42.284 00:20:42.284 Asymmetric Namespace Access Capabilities 00:20:42.284 ANA Optimized State : Supported 00:20:42.284 ANA Non-Optimized State : Supported 00:20:42.284 ANA Inaccessible State : Supported 00:20:42.284 ANA Persistent Loss State : Supported 00:20:42.284 ANA Change State : Supported 00:20:42.284 ANAGRPID is not changed : No 00:20:42.284 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:20:42.284 00:20:42.284 ANA Group Identifier Maximum : 128 00:20:42.284 Number of ANA Group Identifiers : 128 00:20:42.284 Max Number of Allowed Namespaces : 1024 00:20:42.284 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:20:42.284 Command Effects Log Page: Supported 00:20:42.284 Get Log Page Extended Data: Supported 00:20:42.284 Telemetry Log Pages: Not Supported 00:20:42.284 Persistent Event Log Pages: Not Supported 00:20:42.284 Supported Log Pages Log Page: May Support 00:20:42.284 Commands Supported & Effects Log Page: Not Supported 00:20:42.284 Feature Identifiers & Effects Log Page:May Support 00:20:42.284 NVMe-MI Commands & Effects Log Page: May Support 00:20:42.284 Data Area 4 for Telemetry Log: Not Supported 00:20:42.284 Error Log Page Entries Supported: 128 00:20:42.284 Keep Alive: Supported 00:20:42.284 Keep Alive Granularity: 1000 ms 00:20:42.284 00:20:42.284 NVM Command Set Attributes 00:20:42.284 ========================== 00:20:42.284 Submission Queue Entry Size 00:20:42.284 Max: 64 00:20:42.284 Min: 64 00:20:42.284 Completion Queue Entry Size 00:20:42.284 Max: 16 00:20:42.284 Min: 16 00:20:42.284 Number of Namespaces: 1024 00:20:42.284 Compare Command: Not Supported 00:20:42.284 Write Uncorrectable Command: Not Supported 00:20:42.284 Dataset Management Command: Supported 00:20:42.284 Write Zeroes Command: Supported 00:20:42.284 Set Features Save Field: Not Supported 00:20:42.284 Reservations: Not Supported 00:20:42.284 Timestamp: Not Supported 00:20:42.284 Copy: Not Supported 00:20:42.284 Volatile Write Cache: Present 00:20:42.284 Atomic Write Unit (Normal): 1 00:20:42.284 Atomic Write Unit (PFail): 1 00:20:42.284 Atomic Compare & Write Unit: 1 00:20:42.284 Fused Compare & Write: Not Supported 00:20:42.284 Scatter-Gather List 00:20:42.284 SGL Command Set: Supported 00:20:42.284 SGL Keyed: Not Supported 00:20:42.284 SGL Bit Bucket Descriptor: Not Supported 00:20:42.284 SGL Metadata Pointer: Not Supported 00:20:42.284 Oversized SGL: Not Supported 00:20:42.284 SGL Metadata Address: Not Supported 00:20:42.284 SGL Offset: Supported 00:20:42.284 Transport SGL Data Block: Not Supported 00:20:42.284 Replay Protected Memory Block: Not Supported 00:20:42.284 00:20:42.284 Firmware Slot Information 00:20:42.284 ========================= 00:20:42.284 Active slot: 0 00:20:42.284 00:20:42.284 Asymmetric Namespace Access 00:20:42.284 =========================== 00:20:42.284 Change Count : 0 00:20:42.284 Number of ANA Group Descriptors : 1 00:20:42.284 ANA Group Descriptor : 0 00:20:42.284 ANA Group ID : 1 00:20:42.284 Number of NSID Values : 1 00:20:42.284 Change Count : 0 00:20:42.284 ANA State : 1 00:20:42.284 Namespace Identifier : 1 00:20:42.284 00:20:42.284 Commands Supported and Effects 00:20:42.284 ============================== 00:20:42.284 Admin Commands 00:20:42.284 -------------- 00:20:42.284 Get Log Page (02h): Supported 00:20:42.284 Identify (06h): Supported 00:20:42.284 Abort (08h): Supported 00:20:42.284 Set Features (09h): Supported 00:20:42.284 Get Features (0Ah): Supported 00:20:42.284 Asynchronous Event Request (0Ch): Supported 00:20:42.284 Keep Alive (18h): Supported 00:20:42.284 I/O Commands 00:20:42.284 ------------ 00:20:42.284 Flush (00h): Supported 00:20:42.284 Write (01h): Supported LBA-Change 00:20:42.284 Read (02h): Supported 00:20:42.284 Write Zeroes (08h): Supported LBA-Change 00:20:42.284 Dataset Management (09h): Supported 00:20:42.284 00:20:42.284 Error Log 00:20:42.284 ========= 00:20:42.284 Entry: 0 00:20:42.284 Error Count: 0x3 00:20:42.284 Submission Queue Id: 0x0 00:20:42.284 Command Id: 0x5 00:20:42.284 Phase Bit: 0 00:20:42.284 Status Code: 0x2 00:20:42.284 Status Code Type: 0x0 00:20:42.284 Do Not Retry: 1 00:20:42.284 Error 
Location: 0x28 00:20:42.284 LBA: 0x0 00:20:42.284 Namespace: 0x0 00:20:42.284 Vendor Log Page: 0x0 00:20:42.284 ----------- 00:20:42.284 Entry: 1 00:20:42.284 Error Count: 0x2 00:20:42.284 Submission Queue Id: 0x0 00:20:42.284 Command Id: 0x5 00:20:42.284 Phase Bit: 0 00:20:42.284 Status Code: 0x2 00:20:42.284 Status Code Type: 0x0 00:20:42.284 Do Not Retry: 1 00:20:42.284 Error Location: 0x28 00:20:42.284 LBA: 0x0 00:20:42.284 Namespace: 0x0 00:20:42.284 Vendor Log Page: 0x0 00:20:42.284 ----------- 00:20:42.285 Entry: 2 00:20:42.285 Error Count: 0x1 00:20:42.285 Submission Queue Id: 0x0 00:20:42.285 Command Id: 0x4 00:20:42.285 Phase Bit: 0 00:20:42.285 Status Code: 0x2 00:20:42.285 Status Code Type: 0x0 00:20:42.285 Do Not Retry: 1 00:20:42.285 Error Location: 0x28 00:20:42.285 LBA: 0x0 00:20:42.285 Namespace: 0x0 00:20:42.285 Vendor Log Page: 0x0 00:20:42.285 00:20:42.285 Number of Queues 00:20:42.285 ================ 00:20:42.285 Number of I/O Submission Queues: 128 00:20:42.285 Number of I/O Completion Queues: 128 00:20:42.285 00:20:42.285 ZNS Specific Controller Data 00:20:42.285 ============================ 00:20:42.285 Zone Append Size Limit: 0 00:20:42.285 00:20:42.285 00:20:42.285 Active Namespaces 00:20:42.285 ================= 00:20:42.285 get_feature(0x05) failed 00:20:42.285 Namespace ID:1 00:20:42.285 Command Set Identifier: NVM (00h) 00:20:42.285 Deallocate: Supported 00:20:42.285 Deallocated/Unwritten Error: Not Supported 00:20:42.285 Deallocated Read Value: Unknown 00:20:42.285 Deallocate in Write Zeroes: Not Supported 00:20:42.285 Deallocated Guard Field: 0xFFFF 00:20:42.285 Flush: Supported 00:20:42.285 Reservation: Not Supported 00:20:42.285 Namespace Sharing Capabilities: Multiple Controllers 00:20:42.285 Size (in LBAs): 1310720 (5GiB) 00:20:42.285 Capacity (in LBAs): 1310720 (5GiB) 00:20:42.285 Utilization (in LBAs): 1310720 (5GiB) 00:20:42.285 UUID: 89051246-c21b-4e04-95a5-77dcb48ad38b 00:20:42.285 Thin Provisioning: Not Supported 00:20:42.285 Per-NS Atomic Units: Yes 00:20:42.285 Atomic Boundary Size (Normal): 0 00:20:42.285 Atomic Boundary Size (PFail): 0 00:20:42.285 Atomic Boundary Offset: 0 00:20:42.285 NGUID/EUI64 Never Reused: No 00:20:42.285 ANA group ID: 1 00:20:42.285 Namespace Write Protected: No 00:20:42.285 Number of LBA Formats: 1 00:20:42.285 Current LBA Format: LBA Format #00 00:20:42.285 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:20:42.285 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@512 -- # nvmfcleanup 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:42.285 rmmod nvme_tcp 00:20:42.285 rmmod nvme_fabrics 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:20:42.285 06:13:07 
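The identify output above is internally consistent: the kernel target exports namespace 1 with 1,310,720 LBAs of 4,096 bytes each, and

    1310720 LBAs x 4096 bytes/LBA = 5,368,709,120 bytes = 5 x 2^30 bytes = 5 GiB

which matches the reported Size, Capacity and Utilization. With the discovery and subsystem controllers both verified, nvmftestfini starts unloading the host-side modules, which is what the rmmod nvme_tcp / rmmod nvme_fabrics records above show.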
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@513 -- # '[' -n '' ']' 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-save 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@787 -- # iptables-restore 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:42.285 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:42.544 06:13:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
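Teardown mirrors setup. Because every firewall rule added earlier carried an "-m comment --comment 'SPDK_NVMF:...'" tag, the iptr helper can remove exactly those rules by filtering a full ruleset dump, which is the iptables-save / grep -v SPDK_NVMF / iptables-restore pipeline traced above; the same idea as a one-liner:

    # drop only the rules this test added (they are tagged with an SPDK_NVMF comment)
    iptables-save | grep -v SPDK_NVMF | iptables-restore

The veth links, the nvmf_br bridge and the namespace interfaces are then removed in roughly the reverse of the order they were created, and clean_kernel_target unwinds the configfs tree next: disable the namespace, remove the port symlink, rmdir the namespace, port and subsystem directories, and finally modprobe -r nvmet_tcp nvmet.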
# return 0 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@710 -- # echo 0 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:20:42.544 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:20:42.803 06:13:08 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:43.370 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:43.370 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:20:43.635 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:20:43.635 00:20:43.635 real 0m3.245s 00:20:43.635 user 0m1.146s 00:20:43.635 sys 0m1.488s 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.635 ************************************ 00:20:43.635 END TEST nvmf_identify_kernel_target 00:20:43.635 ************************************ 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:43.635 ************************************ 00:20:43.635 START TEST nvmf_auth_host 00:20:43.635 ************************************ 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:20:43.635 * Looking for test storage... 
00:20:43.635 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lcov --version 00:20:43.635 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.896 --rc genhtml_branch_coverage=1 00:20:43.896 --rc genhtml_function_coverage=1 00:20:43.896 --rc genhtml_legend=1 00:20:43.896 --rc geninfo_all_blocks=1 00:20:43.896 --rc geninfo_unexecuted_blocks=1 00:20:43.896 00:20:43.896 ' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.896 --rc genhtml_branch_coverage=1 00:20:43.896 --rc genhtml_function_coverage=1 00:20:43.896 --rc genhtml_legend=1 00:20:43.896 --rc geninfo_all_blocks=1 00:20:43.896 --rc geninfo_unexecuted_blocks=1 00:20:43.896 00:20:43.896 ' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.896 --rc genhtml_branch_coverage=1 00:20:43.896 --rc genhtml_function_coverage=1 00:20:43.896 --rc genhtml_legend=1 00:20:43.896 --rc geninfo_all_blocks=1 00:20:43.896 --rc geninfo_unexecuted_blocks=1 00:20:43.896 00:20:43.896 ' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:43.896 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.896 --rc genhtml_branch_coverage=1 00:20:43.896 --rc genhtml_function_coverage=1 00:20:43.896 --rc genhtml_legend=1 00:20:43.896 --rc geninfo_all_blocks=1 00:20:43.896 --rc geninfo_unexecuted_blocks=1 00:20:43.896 00:20:43.896 ' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
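The cmp_versions trace above is the lcov feature probe for the nvmf_auth_host run: "lt 1.15 2" splits both version strings on ".-:" and compares component by component. Worked through with the values shown, ver1=(1 15) and ver2=(2); at index 0 the comparison 1 < 2 already holds, so the function returns 0, lcov 1.15 is treated as older than 2, and the corresponding --rc lcov_branch_coverage / lcov_function_coverage options are exported, exactly as the LCOV_OPTS records above show.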
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.896 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.897 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@472 -- # prepare_net_devs 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local -g is_hw=no 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # remove_spdk_ns 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@456 -- # nvmf_veth_init 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:43.897 Cannot find device "nvmf_init_br" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:43.897 Cannot find device "nvmf_init_br2" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:43.897 Cannot find device "nvmf_tgt_br" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.897 Cannot find device "nvmf_tgt_br2" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:43.897 Cannot find device "nvmf_init_br" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:43.897 Cannot find device "nvmf_init_br2" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:43.897 Cannot find device "nvmf_tgt_br" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:43.897 Cannot find device "nvmf_tgt_br2" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:43.897 Cannot find device "nvmf_br" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:43.897 Cannot find device "nvmf_init_if" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:43.897 Cannot find device "nvmf_init_if2" 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.897 06:13:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.897 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.897 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.898 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:44.156 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:44.157 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:44.157 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:20:44.157 00:20:44.157 --- 10.0.0.3 ping statistics --- 00:20:44.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.157 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:44.157 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:44.157 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.060 ms 00:20:44.157 00:20:44.157 --- 10.0.0.4 ping statistics --- 00:20:44.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.157 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:44.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:44.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:20:44.157 00:20:44.157 --- 10.0.0.1 ping statistics --- 00:20:44.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.157 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:44.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:44.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:20:44.157 00:20:44.157 --- 10.0.0.2 ping statistics --- 00:20:44.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:44.157 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@457 -- # return 0 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:20:44.157 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@505 -- # nvmfpid=92581 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # waitforlisten 92581 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92581 ']' 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
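Everything from nvmf_veth_init through the modprobe above builds the self-contained network this NVMe/TCP test runs over: initiator veth interfaces stay in the root namespace, their target-side peers move into nvmf_tgt_ns_spdk, the bridge-side peers are enslaved to nvmf_br, port 4420 is opened with commented iptables rules, and connectivity is verified with pings in both directions. A pared-down sketch with a single initiator/target pair (names and addresses taken from the trace; the real helper creates two of each):

# Hedged sketch: one veth pair on the host, one into the target namespace, joined by a
# bridge, with the NVMe/TCP port opened. Mirrors the shape of nvmf_veth_init above.
set -e
NS=nvmf_tgt_ns_spdk

ip netns add "$NS"
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side stays on the host
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side goes into the namespace
ip link set nvmf_tgt_if netns "$NS"

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Open the listener port and allow bridged traffic, tagged with a comment so the rules can
# be found and cleaned up later (the ipts wrapper above embeds the full rule in the comment).
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF

ping -c 1 10.0.0.3                        # host -> namespace
ip netns exec "$NS" ping -c 1 10.0.0.1    # namespace -> host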
00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:44.416 06:13:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=71c4e941c43519e957dd2af114d6b6cb 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.gyS 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 71c4e941c43519e957dd2af114d6b6cb 0 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 71c4e941c43519e957dd2af114d6b6cb 0 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=71c4e941c43519e957dd2af114d6b6cb 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.gyS 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.gyS 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.gyS 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.675 06:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=e9bb60dcd93473c82309b9866b3740b4d64f7a8d0c6ee74231a8e3405d55fa45 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.yYk 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key e9bb60dcd93473c82309b9866b3740b4d64f7a8d0c6ee74231a8e3405d55fa45 3 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 e9bb60dcd93473c82309b9866b3740b4d64f7a8d0c6ee74231a8e3405d55fa45 3 00:20:44.675 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=e9bb60dcd93473c82309b9866b3740b4d64f7a8d0c6ee74231a8e3405d55fa45 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.yYk 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.yYk 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.yYk 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.676 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=b99da002053bb445bc103b558c62bd2adb57220d8ef52db8 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.pif 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key b99da002053bb445bc103b558c62bd2adb57220d8ef52db8 0 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 b99da002053bb445bc103b558c62bd2adb57220d8ef52db8 0 
00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=b99da002053bb445bc103b558c62bd2adb57220d8ef52db8 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.pif 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.pif 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.pif 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=57af6ba2d6298bc8efce1097673ef809a970f4f842007775 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.8F9 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 57af6ba2d6298bc8efce1097673ef809a970f4f842007775 2 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 57af6ba2d6298bc8efce1097673ef809a970f4f842007775 2 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=57af6ba2d6298bc8efce1097673ef809a970f4f842007775 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.8F9 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.8F9 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.8F9 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.935 06:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=581befc17f1e41674c27c0ee971148a6 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.kKL 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 581befc17f1e41674c27c0ee971148a6 1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 581befc17f1e41674c27c0ee971148a6 1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=581befc17f1e41674c27c0ee971148a6 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.kKL 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.kKL 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.kKL 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha256 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=ad00bb38bf6771bf3fca73574274a86b 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha256.XXX 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha256.tZe 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key ad00bb38bf6771bf3fca73574274a86b 1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 ad00bb38bf6771bf3fca73574274a86b 1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # 
key=ad00bb38bf6771bf3fca73574274a86b 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=1 00:20:44.935 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha256.tZe 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha256.tZe 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.tZe 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha384 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=48 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=5c154b7de74e957d03ba49fbcaf507f87b6d1d775c20d8b3 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha384.XXX 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha384.fjH 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 5c154b7de74e957d03ba49fbcaf507f87b6d1d775c20d8b3 2 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 5c154b7de74e957d03ba49fbcaf507f87b6d1d775c20d8b3 2 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=5c154b7de74e957d03ba49fbcaf507f87b6d1d775c20d8b3 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=2 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha384.fjH 00:20:45.194 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha384.fjH 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.fjH 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=null 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=32 00:20:45.195 06:13:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=7168eed28f221f630e3d3a3276d1cc34 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-null.XXX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-null.mob 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key 7168eed28f221f630e3d3a3276d1cc34 0 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 7168eed28f221f630e3d3a3276d1cc34 0 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=7168eed28f221f630e3d3a3276d1cc34 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=0 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # python - 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-null.mob 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-null.mob 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.mob 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # local digest len file key 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # local -A digests 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # digest=sha512 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # len=64 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # key=c9d95d4b6f02ca16aec0deb24ae379df09a87fe58c5112c6eec49dc41fdc5456 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # mktemp -t spdk.key-sha512.XXX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # file=/tmp/spdk.key-sha512.DvX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@753 -- # format_dhchap_key c9d95d4b6f02ca16aec0deb24ae379df09a87fe58c5112c6eec49dc41fdc5456 3 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@743 -- # format_key DHHC-1 c9d95d4b6f02ca16aec0deb24ae379df09a87fe58c5112c6eec49dc41fdc5456 3 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # local prefix key digest 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # prefix=DHHC-1 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # key=c9d95d4b6f02ca16aec0deb24ae379df09a87fe58c5112c6eec49dc41fdc5456 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # digest=3 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
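Each gen_dhchap_key call in this stretch draws random bytes with xxd -p -c0 from /dev/urandom and feeds them to an inline python snippet that emits the DH-HMAC-CHAP secret strings used later in the test (the DHHC-1:00:...: values below). Judging from the traced inputs and outputs, the ASCII hex string itself is the secret, a 4-byte little-endian CRC32 is appended, the result is base64-encoded, and the second field carries the digest id from the digests map above (null=00, sha256=01, sha384=02, sha512=03). A hedged sketch of that encoding, not SPDK's exact helper:

# Hedged sketch of the observed key format: DHHC-1:<digest id>:<base64(secret + CRC32)>:
gen_dhchap_key_sketch() {
    local digest_id=$1 len_bytes=$2 hex
    hex=$(xxd -p -c0 -l "$len_bytes" /dev/urandom)    # e.g. 24 bytes -> 48 hex characters
    python3 -c 'import base64, sys, zlib
secret = sys.argv[1].encode()                          # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))' \
        "$hex" "$digest_id"
}

key=$(gen_dhchap_key_sketch 0 24)                      # "null" digest, 48-character secret
printf '%s\n' "$key" > /tmp/spdk.key-null.example
chmod 0600 /tmp/spdk.key-null.example                  # the trace chmods every key file the same way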
nvmf/common.sh@729 -- # python - 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # chmod 0600 /tmp/spdk.key-sha512.DvX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # echo /tmp/spdk.key-sha512.DvX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.DvX 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 92581 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 92581 ']' 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:45.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:45.195 06:13:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.gyS 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.453 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.yYk ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yYk 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.pif 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.8F9 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.8F9 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.kKL 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.tZe ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.tZe 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.fjH 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.mob ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.mob 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.DvX 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:45.712 06:13:11 
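With the key files in place, the loop above hands each one to the running nvmf_tgt: keyN for the host secret and, where a ckeyN file exists, the paired controller (bidirectional) secret. rpc_cmd is the autotest wrapper around scripts/rpc.py; issued by hand the same registrations would look roughly like this (file names copied from the trace, RPC socket path assumed to be the default /var/tmp/spdk.sock):

# Hedged sketch: register the generated DH-HMAC-CHAP key files with the target's keyring.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$RPC keyring_file_add_key key0  /tmp/spdk.key-null.gyS
$RPC keyring_file_add_key ckey0 /tmp/spdk.key-sha512.yYk   # controller key paired with key0
$RPC keyring_file_add_key key1  /tmp/spdk.key-null.pif
$RPC keyring_file_add_key ckey1 /tmp/spdk.key-sha384.8F9
# ...and so on through key4; key4 has no ckey, matching the empty ckeys[4] above.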
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # local block nvme 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@666 -- # modprobe nvmet 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:45.712 06:13:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:45.971 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.971 Waiting for block devices as requested 00:20:45.971 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.230 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:46.797 No valid GPT data, bailing 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:20:46.797 No valid GPT data, bailing 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:20:46.797 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:20:46.798 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:46.798 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:20:46.798 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:20:46.798 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:20:47.056 No valid GPT data, bailing 00:20:47.057 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:20:47.084 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:47.084 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:47.085 No valid GPT data, bailing 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # [[ -b /dev/nvme1n1 ]] 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@683 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # echo 1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo 1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo tcp 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 4420 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # echo ipv4 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -a 10.0.0.1 -t tcp -s 4420 00:20:47.085 00:20:47.085 Discovery Log Number of Records 2, Generation counter 2 00:20:47.085 =====Discovery Log Entry 0====== 00:20:47.085 trtype: tcp 00:20:47.085 adrfam: ipv4 00:20:47.085 subtype: current discovery subsystem 00:20:47.085 treq: not specified, sq flow control disable supported 00:20:47.085 portid: 1 00:20:47.085 trsvcid: 4420 00:20:47.085 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:47.085 traddr: 10.0.0.1 00:20:47.085 eflags: none 00:20:47.085 sectype: none 00:20:47.085 =====Discovery Log Entry 1====== 00:20:47.085 trtype: tcp 00:20:47.085 adrfam: ipv4 00:20:47.085 subtype: nvme subsystem 00:20:47.085 treq: not specified, sq flow control disable supported 00:20:47.085 portid: 1 00:20:47.085 trsvcid: 4420 00:20:47.085 subnqn: nqn.2024-02.io.spdk:cnode0 00:20:47.085 traddr: 10.0.0.1 00:20:47.085 eflags: none 00:20:47.085 sectype: none 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.085 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:20:47.344 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 
10.0.0.1 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.345 nvme0n1 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.345 06:13:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.604 nvme0n1 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.604 
06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.604 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.605 06:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.605 nvme0n1 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.605 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.863 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:47.864 06:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.864 nvme0n1 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.864 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.123 nvme0n1 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:48.123 
06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.123 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
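The pass above completes the sha256/ffdhe2048 round of the test: for each keyid from 0 to 4 the target-side key is installed with nvmet_auth_set_key, the initiator is limited to the digest/DH-group pair under test with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key, its presence is checked with bdev_nvme_get_controllers, and it is detached again before the next key. A minimal standalone sketch of that per-key cycle follows; it drives scripts/rpc.py directly instead of the framework's rpc_cmd wrapper, and it assumes key objects named key0..key4 (and ckey0..ckey3) were already registered with the SPDK application earlier in the run. The RPC names and flags are the ones visible in the trace; the loop itself is an illustration, not the literal host/auth.sh code.

# Sketch: per-key DH-HMAC-CHAP connect/verify/detach cycle (initiator side).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for keyid in 0 1 2 3 4; do
    # Restrict the initiator to the digest/dhgroup combination under test.
    "$rpc" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # keyid 4 has no controller key in this run, so --dhchap-ctrlr-key is dropped
    # there (the trace does the same via ${ckeys[keyid]:+...}).
    ckey_arg=()
    if [ "$keyid" -ne 4 ]; then
        ckey_arg=(--dhchap-ctrlr-key "ckey${keyid}")
    fi

    "$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey_arg[@]}"

    # The controller only shows up if DH-HMAC-CHAP authentication succeeded.
    "$rpc" bdev_nvme_get_controllers | jq -r '.[].name'
    "$rpc" bdev_nvme_detach_controller nvme0
done

On the target side, each nvmet_auth_set_key call in the trace pushes the HMAC name, the DH group, the DHHC-1 host key and, when one exists, the DHHC-1 controller key into the host entry created earlier with mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0. The trace only records the values being echoed, not the configfs files they land in; the sketch below assumes the stock Linux nvmet attribute names (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key), which is the usual way the in-kernel target is provisioned for DH-HMAC-CHAP.

# Sketch of the target-side provisioning performed once per digest/dhgroup/keyid
# combination; attribute names are assumed, values match those echoed in the trace.
nvmet_set_host_auth() {
    local digest=$1 dhgroup=$2 key=$3 ckey=${4:-}
    local host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac(${digest})" > "$host_dir/dhchap_hash"     # e.g. hmac(sha256)
    echo "$dhgroup"        > "$host_dir/dhchap_dhgroup"  # e.g. ffdhe2048
    echo "$key"            > "$host_dir/dhchap_key"      # DHHC-1:xx:...: host key
    if [ -n "$ckey" ]; then
        # Only written when bidirectional (controller) authentication is tested.
        echo "$ckey" > "$host_dir/dhchap_ctrl_key"
    fi
}
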
00:20:48.124 nvme0n1 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.124 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.383 06:13:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:48.644 06:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.644 nvme0n1 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.644 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.904 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.905 06:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:48.905 06:13:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.905 nvme0n1 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:48.905 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 nvme0n1 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.165 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.424 nvme0n1 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.425 06:13:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.425 nvme0n1 00:20:49.425 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:49.684 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.252 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.253 06:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.253 nvme0n1 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.253 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.511 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:50.512 06:13:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.512 06:13:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 nvme0n1 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.512 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 nvme0n1 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:50.771 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:51.030 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:51.030 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:51.030 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.031 nvme0n1 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:51.031 06:13:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.031 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.291 nvme0n1 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:51.291 06:13:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.197 nvme0n1 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.197 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.198 06:13:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.767 nvme0n1 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:53.767 06:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.767 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.768 06:13:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.768 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.079 nvme0n1 00:20:54.079 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.079 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.079 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:54.080 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.080 
06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.364 nvme0n1 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.364 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.365 06:13:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.933 nvme0n1 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:54.933 06:13:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.933 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.501 nvme0n1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.501 06:13:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 nvme0n1 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.068 
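The get_main_ns_ip fragment traced just above (nvmf/common.sh@765-779) is what picks the address every attach command below dials: it keeps a small map of candidate variables per transport and, for tcp, resolves NVMF_INITIATOR_IP, which this environment sets to 10.0.0.1. A rough sketch of that helper, reconstructed only from the traced lines; the transport variable is shown already expanded to "tcp" in the log, so the name TEST_TRANSPORT below is an assumption:

    # Sketch of the IP-selection helper seen in the trace (nvmf/common.sh),
    # reconstructed from the xtrace output; not the verbatim SPDK source.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=()
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        # TEST_TRANSPORT (assumed name) expands to "tcp" in this run.
        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        # Indirect expansion resolves NVMF_INITIATOR_IP to 10.0.0.1 here.
        [[ -z ${!ip} ]] && return 1
        echo "${!ip}"
    }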
06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.068 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.069 06:13:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 nvme0n1 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:56.635 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:56.636 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 nvme0n1 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 06:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.203 06:13:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.203 06:13:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.771 nvme0n1 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.771 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.031 nvme0n1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.031 nvme0n1 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.031 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:20:58.291 
06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.291 nvme0n1 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.291 
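Key 4 in this run is a one-way secret: its ckeys entry is empty, which is why the trace shows "[[ -z '' ]]" for that keyid and why the corresponding attach commands carry no --dhchap-ctrlr-key. The mechanism is the bash ${var:+...} expansion visible at host/auth.sh@58 above. A small, self-contained illustration of that pattern; the secrets here are placeholders, the real DHHC-1 values appear verbatim in the log:

    # How the optional controller key is threaded through (per auth.sh@58 above).
    ckeys=([1]="DHHC-1:02:placeholder=:" [4]="")   # placeholder values only

    keyid=4
    # Expands to nothing when ckeys[keyid] is empty, so key 4 is tested one-way.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "key${keyid}: ${#ckey[@]} extra attach argument(s)"   # key4: 0

    keyid=1
    # Non-empty entry expands to two extra words for the attach command.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "key${keyid}: ${ckey[*]}"                              # key1: --dhchap-ctrlr-key ckey1

The real script then passes "${ckey[@]}" straight to bdev_nvme_attach_controller, which is why the logged attach lines alternate between having and lacking the controller-key flag.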
06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.291 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.292 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 nvme0n1 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 06:13:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.551 nvme0n1 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.551 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.810 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.811 nvme0n1 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.811 
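The recurring "for digest in ...", "for dhgroup in ..." and "for keyid in ..." markers (host/auth.sh@100-103) give the overall shape of the test: every digest is crossed with every DH group and every configured key, and each combination runs the set-key/connect/verify/detach cycle traced above. A skeleton of that driver loop, reconstructed from the traced loop headers; the array contents below list only what is visible in this portion of the log and are otherwise assumptions:

    # Skeleton of the sweep implied by the loops at host/auth.sh@100-103.
    digests=(sha256 sha384)                           # digests seen so far in this trace
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192) # groups observed in this portion
    keys=(key0 key1 key2 key3 key4)                   # stand-ins for the DHHC-1 secrets

    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
            done
        done
    done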
06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:58.811 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.069 06:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.069 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.070 nvme0n1 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:59.070 06:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.070 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.329 nvme0n1 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.329 06:13:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.329 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.588 nvme0n1 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.588 06:13:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:20:59.588 
06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.588 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:20:59.589 nvme0n1 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.589 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:20:59.848 06:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 nvme0n1 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:59.848 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.107 06:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.107 06:13:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.107 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.108 nvme0n1 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.108 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 nvme0n1 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.367 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.626 06:13:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 nvme0n1 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.626 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.627 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.886 nvme0n1 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:00.886 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.145 06:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.145 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.146 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.146 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.146 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 nvme0n1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.405 06:13:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.405 06:13:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.664 nvme0n1 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.664 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:01.923 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.924 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.183 nvme0n1 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.183 06:13:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.442 nvme0n1 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.442 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:02.701 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.702 06:13:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.702 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.961 nvme0n1 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:02.961 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.962 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.530 nvme0n1 00:21:03.530 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.530 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:03.530 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.530 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:03.530 06:13:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:03.530 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.098 nvme0n1 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.098 06:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.098 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.098 06:13:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.099 06:13:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.666 nvme0n1 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:04.666 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.666 
06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.234 nvme0n1 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.234 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:05.493 06:13:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 nvme0n1 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:21:06.061 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:21:06.062 06:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.062 06:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 nvme0n1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:06.062 06:13:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.062 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.321 nvme0n1 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.321 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 nvme0n1 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.322 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 nvme0n1 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.581 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.841 nvme0n1 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.841 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.100 nvme0n1 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.100 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.101 nvme0n1 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.101 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:21:07.360 
06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:07.360 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.361 nvme0n1 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.361 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.620 
06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.620 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.621 06:13:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.621 nvme0n1 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.621 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.880 nvme0n1 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:07.880 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:07.881 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 nvme0n1 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.140 
06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.140 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.141 06:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.141 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.400 nvme0n1 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.400 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:08.401 06:13:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.401 06:13:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.660 nvme0n1 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.660 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.661 06:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.661 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.920 nvme0n1 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:21:08.920 
06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:08.920 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.921 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
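The trace above and below repeats one pattern per digest/dhgroup/key-ID combination: install the secret under test via nvmet_auth_set_key, restrict the SPDK host to the matching DH-HMAC-CHAP digest and FFDHE group (bdev_nvme_set_options), attach over TCP with the host key and, when one is configured, the controller key (bdev_nvme_attach_controller --dhchap-key/--dhchap-ctrlr-key), confirm the controller appears as nvme0 (bdev_nvme_get_controllers piped through jq), and detach before the next combination. The sketch below condenses one such iteration for readability; it is illustrative only, not the auth.sh code itself. The rpc.py path and the key names key2/ckey2 are assumptions (the secrets are registered earlier in the run, outside this excerpt); the RPC method names and flags are the ones visible verbatim in the trace.

#!/usr/bin/env bash
# Hedged sketch of one connect/verify/detach iteration from the trace above.
# Assumptions: ./scripts/rpc.py is the RPC client, and key2/ckey2 were
# registered earlier in the test run (not shown in this excerpt).
set -euo pipefail

rpc=./scripts/rpc.py   # assumed SPDK JSON-RPC client path
digest=sha512          # DH-HMAC-CHAP hash for this pass
dhgroup=ffdhe4096      # FFDHE group for this pass
keyid=2                # which pre-generated secret pair to use

# Limit the host to a single digest/dhgroup so the negotiation is deterministic.
"$rpc" bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach with the host secret and (for bidirectional auth) the controller secret.
"$rpc" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Authentication succeeded if the controller is visible under its expected name.
[[ "$("$rpc" bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# Tear down before the next digest/dhgroup/key combination.
"$rpc" bdev_nvme_detach_controller nvme0

Note that the key-ID 4 iterations in the trace set an empty ckey and attach with --dhchap-key only, so those passes exercise unidirectional authentication, while the other key IDs also supply --dhchap-ctrlr-key for bidirectional authentication.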
00:21:09.180 nvme0n1 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:09.180 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:09.181 06:13:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.181 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.440 nvme0n1 00:21:09.440 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.440 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.440 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.440 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.440 06:13:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.440 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.440 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.440 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.440 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.440 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.699 06:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.699 06:13:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.699 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.959 nvme0n1 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.959 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.218 nvme0n1 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.218 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.477 06:13:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 nvme0n1 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.736 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.737 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.995 nvme0n1 00:21:10.995 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.995 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:10.996 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzFjNGU5NDFjNDM1MTllOTU3ZGQyYWYxMTRkNmI2Y2I50q+R: 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: ]] 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZTliYjYwZGNkOTM0NzNjODIzMDliOTg2NmIzNzQwYjRkNjRmN2E4ZDBjNmVlNzQyMzFhOGUzNDA1ZDU1ZmE0NT3XilQ=: 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.255 06:13:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.255 06:13:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.823 nvme0n1 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:11.823 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:11.824 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.824 06:13:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:11.824 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.392 nvme0n1 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.392 06:13:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.960 nvme0n1 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWMxNTRiN2RlNzRlOTU3ZDAzYmE0OWZiY2FmNTA3Zjg3YjZkMWQ3NzVjMjBkOGIzLXKVCA==: 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NzE2OGVlZDI4ZjIyMWY2MzBlM2QzYTMyNzZkMWNjMzRJBIs2: 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.960 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.534 nvme0n1 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YzlkOTVkNGI2ZjAyY2ExNmFlYzBkZWIyNGFlMzc5ZGYwOWE4N2ZlNThjNTExMmM2ZWVjNDlkYzQxZmRjNTQ1Nuw26sc=: 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:21:13.534 06:13:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:13.534 06:13:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.534 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 nvme0n1 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # 
local es=0 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 request: 00:21:14.101 { 00:21:14.101 "name": "nvme0", 00:21:14.101 "trtype": "tcp", 00:21:14.101 "traddr": "10.0.0.1", 00:21:14.101 "adrfam": "ipv4", 00:21:14.101 "trsvcid": "4420", 00:21:14.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:14.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:14.101 "prchk_reftag": false, 00:21:14.101 "prchk_guard": false, 00:21:14.101 "hdgst": false, 00:21:14.101 "ddgst": false, 00:21:14.101 "allow_unrecognized_csi": false, 00:21:14.101 "method": "bdev_nvme_attach_controller", 00:21:14.101 "req_id": 1 00:21:14.101 } 00:21:14.101 Got JSON-RPC error response 00:21:14.101 response: 00:21:14.101 { 00:21:14.101 "code": -5, 00:21:14.101 "message": "Input/output error" 00:21:14.101 } 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.101 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.102 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.360 request: 00:21:14.360 { 00:21:14.360 "name": "nvme0", 00:21:14.360 "trtype": "tcp", 00:21:14.360 "traddr": "10.0.0.1", 00:21:14.360 "adrfam": "ipv4", 00:21:14.360 "trsvcid": "4420", 00:21:14.360 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:14.360 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:14.360 "prchk_reftag": false, 00:21:14.360 "prchk_guard": false, 00:21:14.360 "hdgst": false, 00:21:14.360 "ddgst": false, 00:21:14.360 "dhchap_key": "key2", 00:21:14.360 "allow_unrecognized_csi": false, 00:21:14.360 "method": "bdev_nvme_attach_controller", 00:21:14.360 "req_id": 1 00:21:14.360 } 00:21:14.360 Got JSON-RPC error response 00:21:14.360 response: 00:21:14.360 { 00:21:14.360 "code": -5, 00:21:14.360 "message": "Input/output error" 00:21:14.360 } 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.360 06:13:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.360 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.361 request: 00:21:14.361 { 00:21:14.361 "name": "nvme0", 00:21:14.361 "trtype": "tcp", 00:21:14.361 "traddr": "10.0.0.1", 00:21:14.361 "adrfam": "ipv4", 00:21:14.361 "trsvcid": "4420", 
00:21:14.361 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:21:14.361 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:21:14.361 "prchk_reftag": false, 00:21:14.361 "prchk_guard": false, 00:21:14.361 "hdgst": false, 00:21:14.361 "ddgst": false, 00:21:14.361 "dhchap_key": "key1", 00:21:14.361 "dhchap_ctrlr_key": "ckey2", 00:21:14.361 "allow_unrecognized_csi": false, 00:21:14.361 "method": "bdev_nvme_attach_controller", 00:21:14.361 "req_id": 1 00:21:14.361 } 00:21:14.361 Got JSON-RPC error response 00:21:14.361 response: 00:21:14.361 { 00:21:14.361 "code": -5, 00:21:14.361 "message": "Input/output error" 00:21:14.361 } 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.361 nvme0n1 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.361 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.635 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.635 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.635 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:14.635 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.635 06:13:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.635 request: 00:21:14.635 { 00:21:14.635 "name": "nvme0", 00:21:14.635 "dhchap_key": "key1", 00:21:14.635 "dhchap_ctrlr_key": "ckey2", 00:21:14.635 "method": "bdev_nvme_set_keys", 00:21:14.635 "req_id": 1 00:21:14.635 } 00:21:14.635 Got JSON-RPC error response 00:21:14.635 response: 00:21:14.635 
{ 00:21:14.635 "code": -13, 00:21:14.635 "message": "Permission denied" 00:21:14.635 } 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:21:14.635 06:13:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Yjk5ZGEwMDIwNTNiYjQ0NWJjMTAzYjU1OGM2MmJkMmFkYjU3MjIwZDhlZjUyZGI4qHTPhQ==: 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: ]] 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTdhZjZiYTJkNjI5OGJjOGVmY2UxMDk3NjczZWY4MDlhOTcwZjRmODQyMDA3Nzc1OaTp4Q==: 00:21:15.584 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@765 -- # local ip 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # ip_candidates=() 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@766 -- # local -A ip_candidates 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.585 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.843 nvme0n1 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgxYmVmYzE3ZjFlNDE2NzRjMjdjMGVlOTcxMTQ4YTZQdiUD: 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: ]] 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWQwMGJiMzhiZjY3NzFiZjNmY2E3MzU3NDI3NGE4NmJnqIBh: 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.843 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.844 request: 00:21:15.844 { 00:21:15.844 "name": "nvme0", 00:21:15.844 "dhchap_key": "key2", 00:21:15.844 "dhchap_ctrlr_key": "ckey1", 00:21:15.844 "method": "bdev_nvme_set_keys", 00:21:15.844 "req_id": 1 00:21:15.844 } 00:21:15.844 Got JSON-RPC error response 00:21:15.844 response: 00:21:15.844 { 00:21:15.844 "code": -13, 00:21:15.844 "message": "Permission denied" 00:21:15.844 } 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:21:15.844 06:13:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # 
nvmfcleanup 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.779 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:17.038 rmmod nvme_tcp 00:21:17.038 rmmod nvme_fabrics 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@513 -- # '[' -n 92581 ']' 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@514 -- # killprocess 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 92581 ']' 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:17.038 killing process with pid 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 92581' 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 92581 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-save 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@787 -- # iptables-restore 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:17.038 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:17.038 06:13:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@710 -- # echo 0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:21:17.297 06:13:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.235 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.235 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
00:21:18.235 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:21:18.235 06:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.gyS /tmp/spdk.key-null.pif /tmp/spdk.key-sha256.kKL /tmp/spdk.key-sha384.fjH /tmp/spdk.key-sha512.DvX /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:21:18.235 06:13:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:21:18.803 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:21:18.803 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:18.803 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:21:18.803 00:21:18.803 real 0m35.078s 00:21:18.803 user 0m32.385s 00:21:18.803 sys 0m3.830s 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.803 ************************************ 00:21:18.803 END TEST nvmf_auth_host 00:21:18.803 ************************************ 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:18.803 06:13:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.804 ************************************ 00:21:18.804 START TEST nvmf_digest 00:21:18.804 ************************************ 00:21:18.804 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:21:18.804 * Looking for test storage... 
00:21:18.804 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:18.804 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:18.804 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:18.804 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lcov --version 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.064 --rc genhtml_branch_coverage=1 00:21:19.064 --rc genhtml_function_coverage=1 00:21:19.064 --rc genhtml_legend=1 00:21:19.064 --rc geninfo_all_blocks=1 00:21:19.064 --rc geninfo_unexecuted_blocks=1 00:21:19.064 00:21:19.064 ' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.064 --rc genhtml_branch_coverage=1 00:21:19.064 --rc genhtml_function_coverage=1 00:21:19.064 --rc genhtml_legend=1 00:21:19.064 --rc geninfo_all_blocks=1 00:21:19.064 --rc geninfo_unexecuted_blocks=1 00:21:19.064 00:21:19.064 ' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.064 --rc genhtml_branch_coverage=1 00:21:19.064 --rc genhtml_function_coverage=1 00:21:19.064 --rc genhtml_legend=1 00:21:19.064 --rc geninfo_all_blocks=1 00:21:19.064 --rc geninfo_unexecuted_blocks=1 00:21:19.064 00:21:19.064 ' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:19.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.064 --rc genhtml_branch_coverage=1 00:21:19.064 --rc genhtml_function_coverage=1 00:21:19.064 --rc genhtml_legend=1 00:21:19.064 --rc geninfo_all_blocks=1 00:21:19.064 --rc geninfo_unexecuted_blocks=1 00:21:19.064 00:21:19.064 ' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.064 06:13:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.064 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:19.065 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:19.065 Cannot find device "nvmf_init_br" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:19.065 Cannot find device "nvmf_init_br2" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:19.065 Cannot find device "nvmf_tgt_br" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:21:19.065 Cannot find device "nvmf_tgt_br2" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:19.065 Cannot find device "nvmf_init_br" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:19.065 Cannot find device "nvmf_init_br2" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:19.065 Cannot find device "nvmf_tgt_br" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:19.065 Cannot find device "nvmf_tgt_br2" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:19.065 Cannot find device "nvmf_br" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:19.065 Cannot find device "nvmf_init_if" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:19.065 Cannot find device "nvmf_init_if2" 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:19.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:19.065 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:19.065 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:19.325 06:13:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:19.325 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:19.325 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.089 ms 00:21:19.325 00:21:19.325 --- 10.0.0.3 ping statistics --- 00:21:19.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.325 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:19.325 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:19.325 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:21:19.325 00:21:19.325 --- 10.0.0.4 ping statistics --- 00:21:19.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.325 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:19.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:19.325 00:21:19.325 --- 10.0.0.1 ping statistics --- 00:21:19.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.325 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:19.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:21:19.325 00:21:19.325 --- 10.0.0.2 ping statistics --- 00:21:19.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.325 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@457 -- # return 0 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:19.325 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:19.585 ************************************ 00:21:19.585 START TEST nvmf_digest_clean 00:21:19.585 ************************************ 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
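For orientation, the veth/namespace topology that nvmf_veth_init builds in the trace above can be condensed into the following standalone sketch. It only re-states the ip/iptables commands already visible in this log (interface names, the 10.0.0.0/24 addresses and TCP port 4420 come from the trace); it is not a copy of test/nvmf/common.sh, and the comment tags that the real ipts helper appends to each iptables rule are omitted.

# Sketch: rebuild the test network used by the nvmf TCP suites (run as root)
ip netns add nvmf_tgt_ns_spdk
# initiator-side and target-side veth pairs
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
# move the target ends into the namespace, then address everything
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
# bring the links up and bridge the host-side ends together
for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
  ip link set "$dev" master nvmf_br
done
# accept NVMe/TCP traffic on port 4420 and allow bridge-local forwarding
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity check, mirroring the pings in the trace
ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1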
00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@505 -- # nvmfpid=94230 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@506 -- # waitforlisten 94230 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94230 ']' 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.585 06:13:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.585 [2024-10-01 06:13:45.006229] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:19.585 [2024-10-01 06:13:45.006327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.585 [2024-10-01 06:13:45.147676] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.585 [2024-10-01 06:13:45.189443] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.585 [2024-10-01 06:13:45.189501] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.585 [2024-10-01 06:13:45.189515] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.585 [2024-10-01 06:13:45.189525] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.585 [2024-10-01 06:13:45.189534] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:19.585 [2024-10-01 06:13:45.189564] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.844 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.845 [2024-10-01 06:13:45.338916] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:19.845 null0 00:21:19.845 [2024-10-01 06:13:45.373458] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:19.845 [2024-10-01 06:13:45.397591] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94256 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94256 /var/tmp/bperf.sock 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94256 ']' 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:19.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:19.845 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:19.845 [2024-10-01 06:13:45.457001] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:19.845 [2024-10-01 06:13:45.457093] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94256 ] 00:21:20.104 [2024-10-01 06:13:45.594703] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.104 [2024-10-01 06:13:45.636130] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:20.363 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:20.363 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:20.363 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:20.363 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:20.363 06:13:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:20.622 [2024-10-01 06:13:46.030780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:20.622 06:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.622 06:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:20.881 nvme0n1 00:21:20.881 06:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:20.881 06:13:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:21.140 Running I/O for 2 seconds... 
00:21:23.013 17653.00 IOPS, 68.96 MiB/s 17907.00 IOPS, 69.95 MiB/s 00:21:23.013 Latency(us) 00:21:23.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.013 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:23.013 nvme0n1 : 2.01 17912.84 69.97 0.00 0.00 7137.52 6613.18 20614.05 00:21:23.013 =================================================================================================================== 00:21:23.013 Total : 17912.84 69.97 0.00 0.00 7137.52 6613.18 20614.05 00:21:23.013 { 00:21:23.013 "results": [ 00:21:23.013 { 00:21:23.013 "job": "nvme0n1", 00:21:23.013 "core_mask": "0x2", 00:21:23.013 "workload": "randread", 00:21:23.013 "status": "finished", 00:21:23.013 "queue_depth": 128, 00:21:23.013 "io_size": 4096, 00:21:23.013 "runtime": 2.013584, 00:21:23.013 "iops": 17912.836017767324, 00:21:23.013 "mibps": 69.97201569440361, 00:21:23.013 "io_failed": 0, 00:21:23.013 "io_timeout": 0, 00:21:23.013 "avg_latency_us": 7137.518295590018, 00:21:23.013 "min_latency_us": 6613.178181818182, 00:21:23.013 "max_latency_us": 20614.05090909091 00:21:23.013 } 00:21:23.013 ], 00:21:23.013 "core_count": 1 00:21:23.013 } 00:21:23.013 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:23.013 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:23.013 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:23.013 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:23.013 | select(.opcode=="crc32c") 00:21:23.013 | "\(.module_name) \(.executed)"' 00:21:23.013 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94256 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94256 ']' 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94256 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:23.272 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94256 00:21:23.551 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:23.551 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:23.551 killing process with pid 94256 00:21:23.551 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 94256' 00:21:23.551 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94256 00:21:23.551 Received shutdown signal, test time was about 2.000000 seconds 00:21:23.551 00:21:23.551 Latency(us) 00:21:23.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:23.551 =================================================================================================================== 00:21:23.551 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:23.551 06:13:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94256 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94303 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94303 /var/tmp/bperf.sock 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94303 ']' 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:23.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:23.551 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:23.551 [2024-10-01 06:13:49.090145] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:23.551 [2024-10-01 06:13:49.090259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94303 ] 00:21:23.551 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:23.551 Zero copy mechanism will not be used. 
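Each run_bperf pass in this log follows the same client-side sequence: start bdevperf paused, configure it over its private RPC socket, run I/O for two seconds, then read the accel CRC32C counters. A sketch of that sequence, using only the binaries, sockets and options that appear in this trace (here the 131072-byte, queue-depth-16 randread case), is:

SPDK=/home/vagrant/spdk_repo/spdk
BPERF_SOCK=/var/tmp/bperf.sock

# 1. start bdevperf on core 1, paused until RPC init (--wait-for-rpc)
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc &
# (the harness waits for the socket via waitforlisten before issuing RPCs)

# 2. finish initialization, then attach the target with data digest enabled (--ddgst)
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" framework_start_init
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 3. run the workload, then check which accel module executed the CRC32C digests
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

With DSA scanning disabled (scan_dsa=false above), the test expects the module reported by the last step to be "software" and the executed count to be non-zero.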
00:21:23.816 [2024-10-01 06:13:49.228125] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.816 [2024-10-01 06:13:49.261165] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.816 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:23.816 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:23.816 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:23.816 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:23.816 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:24.073 [2024-10-01 06:13:49.603074] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:24.073 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.073 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:24.331 nvme0n1 00:21:24.331 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:24.331 06:13:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:24.589 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:24.589 Zero copy mechanism will not be used. 00:21:24.589 Running I/O for 2 seconds... 
00:21:26.463 8720.00 IOPS, 1090.00 MiB/s 8752.00 IOPS, 1094.00 MiB/s 00:21:26.463 Latency(us) 00:21:26.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.464 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:21:26.464 nvme0n1 : 2.00 8748.06 1093.51 0.00 0.00 1826.27 1638.40 9353.77 00:21:26.464 =================================================================================================================== 00:21:26.464 Total : 8748.06 1093.51 0.00 0.00 1826.27 1638.40 9353.77 00:21:26.464 { 00:21:26.464 "results": [ 00:21:26.464 { 00:21:26.464 "job": "nvme0n1", 00:21:26.464 "core_mask": "0x2", 00:21:26.464 "workload": "randread", 00:21:26.464 "status": "finished", 00:21:26.464 "queue_depth": 16, 00:21:26.464 "io_size": 131072, 00:21:26.464 "runtime": 2.002729, 00:21:26.464 "iops": 8748.063267671263, 00:21:26.464 "mibps": 1093.5079084589079, 00:21:26.464 "io_failed": 0, 00:21:26.464 "io_timeout": 0, 00:21:26.464 "avg_latency_us": 1826.2668061436282, 00:21:26.464 "min_latency_us": 1638.4, 00:21:26.464 "max_latency_us": 9353.774545454546 00:21:26.464 } 00:21:26.464 ], 00:21:26.464 "core_count": 1 00:21:26.464 } 00:21:26.722 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:26.722 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:26.722 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:26.723 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:26.723 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:26.723 | select(.opcode=="crc32c") 00:21:26.723 | "\(.module_name) \(.executed)"' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94303 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94303 ']' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94303 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94303 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:26.982 killing process with pid 94303 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 94303' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94303 00:21:26.982 Received shutdown signal, test time was about 2.000000 seconds 00:21:26.982 00:21:26.982 Latency(us) 00:21:26.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:26.982 =================================================================================================================== 00:21:26.982 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94303 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94356 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94356 /var/tmp/bperf.sock 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94356 ']' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:26.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:26.982 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:26.982 [2024-10-01 06:13:52.559349] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:21:26.982 [2024-10-01 06:13:52.559451] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94356 ] 00:21:27.242 [2024-10-01 06:13:52.689621] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.242 [2024-10-01 06:13:52.722544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:27.242 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:27.242 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:27.242 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:27.242 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:27.242 06:13:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:27.500 [2024-10-01 06:13:53.041037] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:27.500 06:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.500 06:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:27.759 nvme0n1 00:21:27.759 06:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:27.759 06:13:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:28.018 Running I/O for 2 seconds... 
00:21:29.891 19305.00 IOPS, 75.41 MiB/s 19177.50 IOPS, 74.91 MiB/s 00:21:29.891 Latency(us) 00:21:29.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.891 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:29.891 nvme0n1 : 2.01 19220.20 75.08 0.00 0.00 6654.40 6196.13 15252.01 00:21:29.891 =================================================================================================================== 00:21:29.891 Total : 19220.20 75.08 0.00 0.00 6654.40 6196.13 15252.01 00:21:29.891 { 00:21:29.891 "results": [ 00:21:29.891 { 00:21:29.891 "job": "nvme0n1", 00:21:29.891 "core_mask": "0x2", 00:21:29.891 "workload": "randwrite", 00:21:29.891 "status": "finished", 00:21:29.891 "queue_depth": 128, 00:21:29.891 "io_size": 4096, 00:21:29.891 "runtime": 2.008824, 00:21:29.891 "iops": 19220.200475502086, 00:21:29.891 "mibps": 75.07890810743002, 00:21:29.891 "io_failed": 0, 00:21:29.891 "io_timeout": 0, 00:21:29.891 "avg_latency_us": 6654.396068470251, 00:21:29.891 "min_latency_us": 6196.130909090909, 00:21:29.891 "max_latency_us": 15252.014545454545 00:21:29.891 } 00:21:29.891 ], 00:21:29.891 "core_count": 1 00:21:29.891 } 00:21:29.891 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:29.891 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:29.891 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:29.891 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:29.891 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:29.891 | select(.opcode=="crc32c") 00:21:29.891 | "\(.module_name) \(.executed)"' 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94356 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94356 ']' 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94356 00:21:30.150 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94356 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:30.409 killing process with pid 94356 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 94356' 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94356 00:21:30.409 Received shutdown signal, test time was about 2.000000 seconds 00:21:30.409 00:21:30.409 Latency(us) 00:21:30.409 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.409 =================================================================================================================== 00:21:30.409 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94356 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=94403 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 94403 /var/tmp/bperf.sock 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 94403 ']' 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.409 06:13:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:30.409 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:30.409 Zero copy mechanism will not be used. 00:21:30.409 [2024-10-01 06:13:55.989436] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:21:30.409 [2024-10-01 06:13:55.989553] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94403 ] 00:21:30.668 [2024-10-01 06:13:56.127631] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.668 [2024-10-01 06:13:56.161602] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.603 06:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.603 06:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:21:31.603 06:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:21:31.603 06:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:21:31.603 06:13:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:21:31.603 [2024-10-01 06:13:57.168802] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:31.603 06:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:31.603 06:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:32.170 nvme0n1 00:21:32.170 06:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:21:32.170 06:13:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:32.170 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:32.170 Zero copy mechanism will not be used. 00:21:32.170 Running I/O for 2 seconds... 
00:21:34.125 7516.00 IOPS, 939.50 MiB/s 7520.50 IOPS, 940.06 MiB/s 00:21:34.125 Latency(us) 00:21:34.125 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:34.125 nvme0n1 : 2.00 7518.10 939.76 0.00 0.00 2123.17 1563.93 4617.31 00:21:34.125 =================================================================================================================== 00:21:34.125 Total : 7518.10 939.76 0.00 0.00 2123.17 1563.93 4617.31 00:21:34.125 { 00:21:34.125 "results": [ 00:21:34.125 { 00:21:34.125 "job": "nvme0n1", 00:21:34.125 "core_mask": "0x2", 00:21:34.125 "workload": "randwrite", 00:21:34.125 "status": "finished", 00:21:34.125 "queue_depth": 16, 00:21:34.125 "io_size": 131072, 00:21:34.125 "runtime": 2.003831, 00:21:34.125 "iops": 7518.099081209943, 00:21:34.125 "mibps": 939.7623851512428, 00:21:34.125 "io_failed": 0, 00:21:34.125 "io_timeout": 0, 00:21:34.125 "avg_latency_us": 2123.1732108740907, 00:21:34.125 "min_latency_us": 1563.9272727272728, 00:21:34.125 "max_latency_us": 4617.309090909091 00:21:34.125 } 00:21:34.125 ], 00:21:34.125 "core_count": 1 00:21:34.125 } 00:21:34.125 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:21:34.125 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:21:34.125 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:21:34.125 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:21:34.125 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:21:34.126 | select(.opcode=="crc32c") 00:21:34.126 | "\(.module_name) \(.executed)"' 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 94403 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94403 ']' 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94403 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94403 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:34.385 killing process with pid 94403 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 94403' 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94403 00:21:34.385 Received shutdown signal, test time was about 2.000000 seconds 00:21:34.385 00:21:34.385 Latency(us) 00:21:34.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.385 =================================================================================================================== 00:21:34.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.385 06:13:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94403 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 94230 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 94230 ']' 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 94230 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94230 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:34.644 killing process with pid 94230 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94230' 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 94230 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 94230 00:21:34.644 00:21:34.644 real 0m15.315s 00:21:34.644 user 0m29.918s 00:21:34.644 sys 0m4.412s 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:34.644 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:21:34.644 ************************************ 00:21:34.644 END TEST nvmf_digest_clean 00:21:34.644 ************************************ 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:34.904 ************************************ 00:21:34.904 START TEST nvmf_digest_error 00:21:34.904 ************************************ 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@503 -- # 
timing_enter start_nvmf_tgt 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@505 -- # nvmfpid=94482 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@506 -- # waitforlisten 94482 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94482 ']' 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.904 06:14:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:34.904 [2024-10-01 06:14:00.375250] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:34.904 [2024-10-01 06:14:00.375358] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.904 [2024-10-01 06:14:00.506359] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.164 [2024-10-01 06:14:00.539204] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.164 [2024-10-01 06:14:00.539262] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.164 [2024-10-01 06:14:00.539272] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.164 [2024-10-01 06:14:00.539280] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.164 [2024-10-01 06:14:00.539286] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
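The traces from this point on reproduce the digest-error flow: crc32c on the nvmf target is routed through the "error" accel module, bdevperf attaches to the NVMe/TCP subsystem with data digest (--ddgst) enabled, corruption is injected into the target's crc32c operations, and the injected corruption then surfaces as the stream of "data digest error" completions later in this trace. Below is a minimal sketch of that sequence assembled only from the commands visible in this log (run from /home/vagrant/spdk_repo/spdk; the 10.0.0.3 address, the cnode1 NQN and the socket paths are values of this particular run, and the ip-netns wrapper used by the harness is omitted), not the full digest.sh logic:

  # target side: start nvmf_tgt paused and send the crc32c opcode through the error module
  build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # (the harness then resumes initialization and creates the null0 bdev, the TCP transport
  #  and the 10.0.0.3:4420 listener; those RPCs are not spelled out in this trace)

  # client side: bdevperf on core mask 0x2 with its own RPC socket, data digest enabled
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # switch the injection to "corrupt" on the target, then run the 2-second randread job
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

  # the clean test above checks which accel module executed the crc32c operations the same way:
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
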
00:21:35.164 [2024-10-01 06:14:00.539319] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.731 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:35.731 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:35.731 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:35.731 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:35.731 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.991 [2024-10-01 06:14:01.363798] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.991 [2024-10-01 06:14:01.397141] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:35.991 null0 00:21:35.991 [2024-10-01 06:14:01.427373] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.991 [2024-10-01 06:14:01.451471] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94514 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94514 /var/tmp/bperf.sock 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:21:35.991 06:14:01 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94514 ']' 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:35.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:35.991 06:14:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:35.991 [2024-10-01 06:14:01.514305] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:35.991 [2024-10-01 06:14:01.514404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94514 ] 00:21:36.250 [2024-10-01 06:14:01.655039] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.250 [2024-10-01 06:14:01.696920] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.250 [2024-10-01 06:14:01.731264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:37.189 06:14:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:37.449 nvme0n1 00:21:37.449 06:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:37.449 06:14:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:37.449 06:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:37.449 06:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:37.449 06:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:37.449 06:14:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:37.709 Running I/O for 2 seconds... 00:21:37.709 [2024-10-01 06:14:03.159310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.159367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.159380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.174058] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.174107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.174119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.188307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.188354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.188366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.202579] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.202626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.202637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.216894] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.216949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.216961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.230978] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.231024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:797 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.231035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.245236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.245283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.245295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.259292] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.259338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.259350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.273376] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.273422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.273433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.287366] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.287413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.287424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.301430] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.301476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.301487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.709 [2024-10-01 06:14:03.315502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.709 [2024-10-01 06:14:03.315549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.709 [2024-10-01 06:14:03.315560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.331772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.331823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.331836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.350326] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.350377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.350389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.365804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.365851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:16276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.365864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.380716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.380763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:7564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.380774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.394852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.394898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:11613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.394909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.408991] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.409036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.409047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.423087] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.423134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.423146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.437166] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.437213] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:2051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.437225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.451298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.451345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:15440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.451356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.465358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.465405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.465416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.479338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.479385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.479396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.493502] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.493548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.493559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.507494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.507540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:2595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.507551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.521691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.521738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.521749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.535784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.535831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.535842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.550183] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.550230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.550241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.564066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.564126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:37.988 [2024-10-01 06:14:03.577823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:37.988 [2024-10-01 06:14:03.577869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:1957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:37.988 [2024-10-01 06:14:03.577880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.595362] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.595414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.595428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.612171] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.612223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:19772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.612237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.628309] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.628354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:23707 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.628366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.642417] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.642466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.642478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.656541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.656589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6126 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.656600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.670471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.670518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.670529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.684446] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.684492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.684503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.698439] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.259 [2024-10-01 06:14:03.698486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.259 [2024-10-01 06:14:03.698498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.259 [2024-10-01 06:14:03.712470] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.712516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:12043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.712526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.726529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.726574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:22178 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.726585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:21:38.260 [2024-10-01 06:14:03.742607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.742654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:3073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.742666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.759596] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.759643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.759655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.775086] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.775134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.775147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.789662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.789709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.789720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.803833] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.803880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:8978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.803938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.817786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.817833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.817844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.831667] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.831713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.831724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.845782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.845827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.845839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.260 [2024-10-01 06:14:03.859777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.260 [2024-10-01 06:14:03.859823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:9873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.260 [2024-10-01 06:14:03.859834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.874545] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.874593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:6469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.874605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.889307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.889353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.889364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.903298] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.903343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:14975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.903355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.917283] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.917329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.917340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.931114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.931160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.931171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.945059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.945105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:8343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.945116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.959021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.959066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:24146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.959077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.972887] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.972944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.972958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:03.986660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:03.986705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:03.986716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.000626] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.000671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.000682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.014471] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.014516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.014528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.028436] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.028481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:7438 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.028492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.042322] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.042367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.042379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.056374] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.056419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.056430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.078549] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.078597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.078609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.093114] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.093161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.093172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.520 [2024-10-01 06:14:04.107838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.520 [2024-10-01 06:14:04.107885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.520 [2024-10-01 06:14:04.107948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.521 [2024-10-01 06:14:04.124638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.521 [2024-10-01 06:14:04.124687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.521 [2024-10-01 06:14:04.124698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 17205.00 IOPS, 67.21 MiB/s [2024-10-01 06:14:04.143310] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.143358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.143369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.158643] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.158692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:9598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.158704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.173673] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.173722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.173734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.188824] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.188872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.188884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.203342] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.203389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.203401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.218128] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.218175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.218187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.233020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.233067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.233079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.247878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.247972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.247984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.262706] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.262765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:23990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.262778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.277620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.277669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.277680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.292483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.292531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.292543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.307637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.307685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:1394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.307696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.322059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.780 [2024-10-01 06:14:04.322106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.780 [2024-10-01 06:14:04.322118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.780 [2024-10-01 06:14:04.336063] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.781 [2024-10-01 06:14:04.336111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.781 [2024-10-01 06:14:04.336123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.781 [2024-10-01 06:14:04.350410] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.781 [2024-10-01 06:14:04.350456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.781 [2024-10-01 06:14:04.350467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.781 [2024-10-01 06:14:04.364481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.781 [2024-10-01 06:14:04.364528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.781 [2024-10-01 06:14:04.364539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.781 [2024-10-01 06:14:04.378565] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.781 [2024-10-01 06:14:04.378611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.781 [2024-10-01 06:14:04.378623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:38.781 [2024-10-01 06:14:04.393307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:38.781 [2024-10-01 06:14:04.393356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:21743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:38.781 [2024-10-01 06:14:04.393369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.040 [2024-10-01 06:14:04.408551] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.040 [2024-10-01 06:14:04.408598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.040 [2024-10-01 06:14:04.408609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.040 [2024-10-01 06:14:04.422766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.040 [2024-10-01 06:14:04.422813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.040 [2024-10-01 06:14:04.422824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.040 [2024-10-01 06:14:04.436891] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.040 [2024-10-01 06:14:04.436945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:11356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.436958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.451020] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.451065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:12047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.451076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.465102] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.465148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:14403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.465160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.479220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.479266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.479277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.493287] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.493333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:22023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.493344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.507307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.507353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.507364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.521485] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.521531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.521542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.535520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.535565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6066 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.535579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.549758] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.549804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.549815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.563808] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.563854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:6025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.563866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.577829] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.577875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.577887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.591759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.591806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.591817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.605989] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.606035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.606046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.619971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.620017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:10161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.620029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.633992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.634039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:5330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.634050] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.041 [2024-10-01 06:14:04.648515] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.041 [2024-10-01 06:14:04.648561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:25541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.041 [2024-10-01 06:14:04.648572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.663804] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.663850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.663862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.677985] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.678039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.678052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.692101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.692149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:5100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.692161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.706083] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.706128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.706140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.720238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.720285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.720312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.734238] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.734283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:39.301 [2024-10-01 06:14:04.734295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.748483] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.748529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17868 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.748540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.764473] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.764521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.764532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.781181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.781216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.781245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.796032] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.796095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.810035] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.810081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.810092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.824015] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.824062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.824074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.838495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.838542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:1782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.838553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.853572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.853635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:13219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.853646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.868410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.868457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.868469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.883233] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.883280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.883291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.897947] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.897995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.898006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.301 [2024-10-01 06:14:04.912049] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.301 [2024-10-01 06:14:04.912100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.301 [2024-10-01 06:14:04.912114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.927172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.927218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.927229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.941231] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.941293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.941304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.955443] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.955488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.955499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.969420] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.969466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.969477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.983373] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.983419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.983430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:04.997361] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:04.997407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:04.997418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.017236] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.017282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.017294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.031071] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.031118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.031129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.045039] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 
[2024-10-01 06:14:05.045086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.045097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.058886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.058939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.058951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.072812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.072860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.072871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.086637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.086683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.086694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.100709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.100756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.100767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.114564] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.114610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.114621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 [2024-10-01 06:14:05.128562] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1a5b510) 00:21:39.562 [2024-10-01 06:14:05.128608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:16687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:39.562 [2024-10-01 06:14:05.128619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:39.562 17394.50 IOPS, 67.95 MiB/s 00:21:39.562 Latency(us) 00:21:39.562 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:21:39.562 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:21:39.562 nvme0n1 : 2.01 17406.65 67.99 0.00 0.00 7348.10 6642.97 29789.09 00:21:39.562 =================================================================================================================== 00:21:39.563 Total : 17406.65 67.99 0.00 0.00 7348.10 6642.97 29789.09 00:21:39.563 { 00:21:39.563 "results": [ 00:21:39.563 { 00:21:39.563 "job": "nvme0n1", 00:21:39.563 "core_mask": "0x2", 00:21:39.563 "workload": "randread", 00:21:39.563 "status": "finished", 00:21:39.563 "queue_depth": 128, 00:21:39.563 "io_size": 4096, 00:21:39.563 "runtime": 2.005958, 00:21:39.563 "iops": 17406.64560274941, 00:21:39.563 "mibps": 67.99470938573988, 00:21:39.563 "io_failed": 0, 00:21:39.563 "io_timeout": 0, 00:21:39.563 "avg_latency_us": 7348.099462882107, 00:21:39.563 "min_latency_us": 6642.967272727273, 00:21:39.563 "max_latency_us": 29789.090909090908 00:21:39.563 } 00:21:39.563 ], 00:21:39.563 "core_count": 1 00:21:39.563 } 00:21:39.563 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:39.563 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:39.563 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:39.563 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:39.563 | .driver_specific 00:21:39.563 | .nvme_error 00:21:39.563 | .status_code 00:21:39.563 | .command_transient_transport_error' 00:21:39.822 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 136 > 0 )) 00:21:39.822 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94514 00:21:39.822 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94514 ']' 00:21:39.822 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94514 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94514 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:40.084 killing process with pid 94514 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94514' 00:21:40.084 Received shutdown signal, test time was about 2.000000 seconds 00:21:40.084 00:21:40.084 Latency(us) 00:21:40.084 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:40.084 =================================================================================================================== 00:21:40.084 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@969 -- # kill 94514 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94514 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94574 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94574 /var/tmp/bperf.sock 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94574 ']' 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.084 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:40.084 Zero copy mechanism will not be used. 00:21:40.084 [2024-10-01 06:14:05.641981] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:21:40.084 [2024-10-01 06:14:05.642063] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94574 ] 00:21:40.342 [2024-10-01 06:14:05.769721] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.342 [2024-10-01 06:14:05.806009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.342 [2024-10-01 06:14:05.834823] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:40.342 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.342 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:40.342 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:40.342 06:14:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:40.607 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:41.177 nvme0n1 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:41.177 06:14:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:41.177 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:41.177 Zero copy mechanism will not be used. 00:21:41.177 Running I/O for 2 seconds... 
00:21:41.177 [2024-10-01 06:14:06.637262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.637320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.637334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.641415] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.641466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.641478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.645449] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.645499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.645511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.649475] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.649536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.653529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.653579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.653590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.657637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.657687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.657698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.661560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.661609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.661621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.665477] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.665526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.665538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.669504] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.669546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.669563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.673467] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.673515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.673528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.677413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.677461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.677473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.681357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.681406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.681418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.685530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.685579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.685591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.177 [2024-10-01 06:14:06.689509] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.177 [2024-10-01 06:14:06.689557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.177 [2024-10-01 06:14:06.689569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.693391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.693439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.693452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.697406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.697454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.697466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.701587] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.701637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.701649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.705529] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.705577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.705588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.709456] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.709504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.709516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.713409] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.713457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.713468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.717276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.717324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.178 [2024-10-01 06:14:06.717335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.721048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.721095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.721107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.724749] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.724796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.724807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.728614] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.728661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.728673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.732404] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.732451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.732462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.736185] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.736235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.736248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.739792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.739840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.739852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.743679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.743726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.743738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.747448] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.747495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.747506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.751291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.751339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.751350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.755074] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.755121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.755132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.758802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.758851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.758863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.762606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.762653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.762665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.766517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.766566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.766578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.770431] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.770479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.770491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.774315] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.774364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.774377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.778204] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.778252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.778265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.782105] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.782153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.782165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.786061] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.786125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.786138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.178 [2024-10-01 06:14:06.790335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.178 [2024-10-01 06:14:06.790383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.178 [2024-10-01 06:14:06.790395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.794410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.794458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.794469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.798568] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:41.440 [2024-10-01 06:14:06.798616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.798628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.802866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.802954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.802967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.807026] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.807074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.807087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.811317] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.811365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.811377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.815675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.815725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.815749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.820365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.820415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.820427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.824795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.824843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.824855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.828962] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.829023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.829036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.833135] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.833186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.833199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.837358] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.837404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.837417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.841508] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.841556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.841567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.845681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.845728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.845740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.849634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.849681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.849692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.440 [2024-10-01 06:14:06.853726] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.440 [2024-10-01 06:14:06.853789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.440 [2024-10-01 06:14:06.853801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.857678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.857725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.857736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.861525] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.861572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.861585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.865510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.865557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.865568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.869332] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.869379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.869406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.873417] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.873466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.873478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.877365] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.877413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.877425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.881352] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.881400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.881412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.885276] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.885324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.885350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.889229] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.889277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.889289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.893172] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.893220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.893233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.897115] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.897164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.897176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.901031] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.901079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.901092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.904948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.905005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.905018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.908712] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.908760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.908771] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.912517] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.912563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.912575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.916318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.916364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.916376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.920004] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.920052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.920064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.923690] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.923738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.923749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.927488] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.927534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.927546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.931267] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.931313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.931325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.935053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.935099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.935111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.938835] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.938883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.938894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.942617] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.942665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.942677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.946561] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.946608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.946620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.950435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.950483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.950511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.954338] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.954384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.954396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.441 [2024-10-01 06:14:06.958113] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.441 [2024-10-01 06:14:06.958159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.441 [2024-10-01 06:14:06.958171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.961819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.961867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.961878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.965536] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.965583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.965594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.969353] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.969400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.969411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.973178] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.973226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.973238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.976900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.976973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.976985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.980637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.980684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.980696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.984421] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.984468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.984479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.988150] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.988199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.988211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.991753] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.991800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.991812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.995516] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.995563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.995575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:06.999307] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:06.999354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:06.999365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.003152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.003199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.003210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.006837] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.006884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.006895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.010634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.010681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.010693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.014499] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:41.442 [2024-10-01 06:14:07.014547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.014560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.018440] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.018487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.018499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.022329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.022377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.022389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.026127] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.026175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.026186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.029961] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.029991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.030002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.033721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.033769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.033781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.037615] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.037662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.037674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.041517] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.041564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.041575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.045484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.045531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.045543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.442 [2024-10-01 06:14:07.049410] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.442 [2024-10-01 06:14:07.049459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.442 [2024-10-01 06:14:07.049472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.053677] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.703 [2024-10-01 06:14:07.053727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.703 [2024-10-01 06:14:07.053739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.057800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.703 [2024-10-01 06:14:07.057847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.703 [2024-10-01 06:14:07.057858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.061873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.703 [2024-10-01 06:14:07.061935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.703 [2024-10-01 06:14:07.061949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.065717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.703 [2024-10-01 06:14:07.065773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.703 [2024-10-01 06:14:07.065786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.069629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.703 [2024-10-01 06:14:07.069677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.703 [2024-10-01 06:14:07.069688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.703 [2024-10-01 06:14:07.073542] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.073588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.073600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.077453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.077500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.077511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.081407] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.081453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.081465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.085205] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.085252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.085264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.089018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.089066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.089078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.092735] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.092782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.092793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.096633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.096680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.096691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.100391] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.100437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.100449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.104167] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.104215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.104241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.107751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.107799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.107810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.111604] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.111650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.111662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.115394] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.115441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.115453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.119200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.119247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.119258] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.122977] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.123024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.123036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.126620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.126667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.126678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.130496] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.130544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.130555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.134318] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.134365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.134376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.138170] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.138216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.138227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.142156] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.142204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.142215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.146098] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.146145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.146157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.150059] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.150108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.150119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.154093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.154141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.154153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.158066] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.158100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.158112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.161937] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.161984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.161996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.165934] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.165982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.165994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.169776] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.169823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.169835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.173647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.704 [2024-10-01 06:14:07.173695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.704 [2024-10-01 06:14:07.173706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.704 [2024-10-01 06:14:07.177416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.177463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.177475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.181206] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.181254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.181266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.184967] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.185014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.185025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.188722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.188769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.188781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.192518] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.192565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.192577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.196345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.196392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.196404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.200093] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.200142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.200155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.203787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.203834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.203846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.207709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.207756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.207768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.211559] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.211606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.211618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.215348] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.215396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.215407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.219096] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.219143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.219154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.222900] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.222956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.222968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.226692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:41.705 [2024-10-01 06:14:07.226738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.226749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.230494] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.230541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.230552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.234284] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.234347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.234357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.238130] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.238178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.238189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.241883] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.241954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.241967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.245694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.245741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.245752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.249597] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.249645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.249657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.253535] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.253582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.253594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.257329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.257377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.705 [2024-10-01 06:14:07.257388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.705 [2024-10-01 06:14:07.261047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.705 [2024-10-01 06:14:07.261092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.261104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.264757] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.264804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.264815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.268522] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.268568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.268580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.272319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.272366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.272377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.276101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.276151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.276164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.279948] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.279996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.280008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.283704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.283751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.283763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.287490] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.287538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.287549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.291255] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.291302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.291314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.295010] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.295056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.295067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.298686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.298734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.298745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.302507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.302555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.302566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.306356] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.306403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.306414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.310133] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.310181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.310193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.706 [2024-10-01 06:14:07.313980] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.706 [2024-10-01 06:14:07.314054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.706 [2024-10-01 06:14:07.314081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.318145] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.318194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.318221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.322053] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.322100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.322112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.326200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.326249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.326261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.330013] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.330060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.330072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.333792] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.333839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.333850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.337736] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.337783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.337794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.341572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.341619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.341631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.345557] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.345604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.345615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.349335] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.349382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.349393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.353265] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.353312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.353323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.357047] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.357093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.968 [2024-10-01 06:14:07.357104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.360745] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.360792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.360804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.364572] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.364619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.364631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.368423] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.368470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.368483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.372177] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.372225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.372251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.375832] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.375878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.375940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.379595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.379642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.379654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.383442] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.383489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.383500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.387207] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.387253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.387265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.390932] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.390978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.968 [2024-10-01 06:14:07.390990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.968 [2024-10-01 06:14:07.394692] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.968 [2024-10-01 06:14:07.394738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.394750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.398547] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.398595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.398606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.402357] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.402404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.402416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.406152] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.406200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.406211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.409921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.409968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.409980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.413730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.413778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.413789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.417598] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.417645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.417657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.421524] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.421571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.421582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.425336] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.425384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.425396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.429181] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.429227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.429238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.432897] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.432954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.432965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.436592] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:41.969 [2024-10-01 06:14:07.436639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.436651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.440413] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.440459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.440471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.444200] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.444263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.444288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.447992] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.448039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.448051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.451730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.451777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.451789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.455558] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.455604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.455615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.459371] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.459418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.459429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.463191] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.463238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.463250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.466923] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.466969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.466980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.470679] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.470726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.470737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.474511] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.474559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.474571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.478378] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.478425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.478436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.482116] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.482163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.482175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.485846] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.485893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.485919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.489724] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.489772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.489784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.493608] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.493656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.969 [2024-10-01 06:14:07.493668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.969 [2024-10-01 06:14:07.497481] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.969 [2024-10-01 06:14:07.497528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.497540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.501262] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.501309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.501320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.505000] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.505046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.505057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.508691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.508738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.508750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.512503] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.512550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.512561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.516247] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.516310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.516321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.520048] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.520096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.520108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.523800] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.523846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.523857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.527651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.527698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.527710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.531453] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.531499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.531511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.535251] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.535299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.535311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.538988] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.539034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.539046] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.542728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.542775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.542787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.546682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.546730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.546741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.550507] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.550554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.550566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.554403] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.554451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.554463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.558137] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.558184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.558196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.561972] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.562019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.562031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.565814] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.565861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:41.970 [2024-10-01 06:14:07.565873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.569510] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.569557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.569569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.573345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.573391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.573404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:41.970 [2024-10-01 06:14:07.577220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:41.970 [2024-10-01 06:14:07.577282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:41.970 [2024-10-01 06:14:07.577309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.581544] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.581607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.581619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.585462] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.585509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.585521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.589540] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.589589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.589601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.593355] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.593403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.593414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.597129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.597175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.597187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.600879] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.600936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.600947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.604657] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.604705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.604716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.608455] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.608514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.612319] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.612365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.612377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.616030] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.616079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.616091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.619777] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.619823] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.619835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.623618] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.623665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.623676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.627441] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.627488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.627501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.631271] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.631318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.631329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 7998.00 IOPS, 999.75 MiB/s [2024-10-01 06:14:07.636217] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.636310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.636322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.639971] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.640022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.640034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.643709] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.643756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.643768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.647627] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.647674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.647685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.651607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.651670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.651682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.655600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.655646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.655658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.659399] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.659447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.659458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.663210] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.663257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.663268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.667022] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.667068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.667079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.670810] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.670857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.670869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.674551] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.674598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.232 [2024-10-01 06:14:07.674610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.232 [2024-10-01 06:14:07.678416] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.232 [2024-10-01 06:14:07.678464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.678475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.682220] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.682268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.682280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.686021] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.686068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.686080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.689738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.689785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.689797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.693541] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.693590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.693601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.697484] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.697532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.697543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.701435] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.701483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.701494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.705722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.705770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.705782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.709797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.709845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.709856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.714023] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.714073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.714086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.718424] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.718472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.718484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.722778] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.722844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.722856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.727328] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.727390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.727416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.731530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.731579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.731590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.735606] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.735653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.735665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.739784] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.739833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.739845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.743866] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.743953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.743968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.748129] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.748182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.748196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.752101] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.752153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.752182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.755979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.756029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.756043] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.759886] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.759998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.760018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.763864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.763964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.763979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.767789] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.767837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.767850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.771787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.771835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.771847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.775868] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.775953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.775966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.779729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.779777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.779788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.783662] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.783699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:42.233 [2024-10-01 06:14:07.783728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.233 [2024-10-01 06:14:07.787631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.233 [2024-10-01 06:14:07.787856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.788050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.792530] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.792754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.793029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.797060] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.797283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.797528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.801527] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.801566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.801594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.805675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.805714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.805742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.809621] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.809657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.809687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.813651] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.813689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.813717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.817620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.817657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.817685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.821797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.821835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.821864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.826109] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.826147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.826177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.830329] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.830366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.830395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.834623] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.834661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.834690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.839146] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.839186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.839216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.234 [2024-10-01 06:14:07.844045] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.234 [2024-10-01 06:14:07.844089] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.234 [2024-10-01 06:14:07.844105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.495 [2024-10-01 06:14:07.848730] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.495 [2024-10-01 06:14:07.848768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.495 [2024-10-01 06:14:07.848797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.495 [2024-10-01 06:14:07.853495] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.495 [2024-10-01 06:14:07.853552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.495 [2024-10-01 06:14:07.853582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.858124] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.858166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.858197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.862550] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.862589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.862618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.866812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.866850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.866878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.871363] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.871403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.871434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.875406] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.875443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.875472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.879350] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.879387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.879415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.883293] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.883329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.883358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.887364] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.887400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.887428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.891345] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.891381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.891409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.895281] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.895317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.895345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.899050] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.899084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.899112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.902782] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.903027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.903051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.906860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.907062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.907087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.910884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.910930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.910958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.914610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.914801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.914824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.918620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.918810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.918833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.922734] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.922955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.922981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.926841] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.927066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.927090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.930797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.931024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.931048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.934822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.935022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.935048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.938775] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.938974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.938998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.942839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.943063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.943088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.947001] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.947036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.947065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.950743] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.950967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.950992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.954869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.955093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.955114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.958876] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.959076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.959099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.496 [2024-10-01 06:14:07.962851] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.496 [2024-10-01 06:14:07.963072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.496 [2024-10-01 06:14:07.963098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.966867] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.967093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.967117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.970780] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.970983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.971006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.974795] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.975020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.975044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.978921] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.979145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.979170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.982764] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.982973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.982996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.986819] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.987042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.987067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.990847] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.991070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.991094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.994881] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.995104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.995128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:07.998962] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:07.998998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:07.999026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.002595] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.002802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.002826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.006678] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.006864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.006887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.010632] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.010839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.497 [2024-10-01 06:14:08.010864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.014675] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.014863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.014886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.018622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.018809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.018832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.022622] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.022814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.022837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.026665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.026853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.026877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.030686] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.030874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.030899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.034786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.034991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.035012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.038732] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.038953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.038978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.042691] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.042879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.042903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.046772] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.046972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.046997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.050884] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.050929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.050957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.054684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.054878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.054895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.058688] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.058860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.058893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.062563] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.062750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.062767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.066560] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.066758] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.066777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.497 [2024-10-01 06:14:08.070694] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.497 [2024-10-01 06:14:08.070866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.497 [2024-10-01 06:14:08.070898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.074583] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.074772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.074789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.078681] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.078868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.078884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.082703] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.082892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.082909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.086740] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.086953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.086977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.090852] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.091070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.091087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.094860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:42.498 [2024-10-01 06:14:08.095057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.095074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.098798] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.099017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.099040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.102785] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.103021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.103044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.498 [2024-10-01 06:14:08.107282] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.498 [2024-10-01 06:14:08.107335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.498 [2024-10-01 06:14:08.107363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.111520] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.111557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.111585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.115291] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.115342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.115387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.119296] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.119332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.119360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.123105] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.123141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.123169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.126880] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.127101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.127126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.130875] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.131074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.131097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.134830] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.135030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.138823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.139093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.139111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.142839] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.143061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.143086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.146979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.147014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.147042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.150656] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.150829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.150862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.154864] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.759 [2024-10-01 06:14:08.155085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.759 [2024-10-01 06:14:08.155103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.759 [2024-10-01 06:14:08.158882] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.159062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.159094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.162869] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.163091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.163109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.166979] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.167014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.167041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.170663] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.170852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.170876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.174704] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.174892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.174949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.178594] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.178780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.178804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.182610] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.182797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.182820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.186582] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.186773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.186796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.190631] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.190819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.190842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.194720] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.194960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.194986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.198889] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.198934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.198963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.202689] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.202863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.202895] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.206782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.206982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.207020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.210816] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.211000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.211035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.214827] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.215027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.215066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.218859] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.219081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.219098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.222986] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.223021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.223050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.226718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.226906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.226966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.230823] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.231046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:42.760 [2024-10-01 06:14:08.231070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.234762] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.234960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.234984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.238787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.239011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.239035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.242812] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.243036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.243061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.246958] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.246993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.247021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.250705] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.250881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.250945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.254766] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.254948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.254982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.258838] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.259021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.259056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.760 [2024-10-01 06:14:08.262802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.760 [2024-10-01 06:14:08.263020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.760 [2024-10-01 06:14:08.263044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.266853] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.267073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.267090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.270916] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.270951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.270979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.274629] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.274817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.274840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.278744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.278963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.278988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.282765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.282969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.282990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.286773] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.286971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.286994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.290739] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.290959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.290985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.294671] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.294858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.294881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.298577] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.298765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.298789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.302616] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.302804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.302827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.306634] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.306821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.306840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.310683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.310870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.310892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.314698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.314887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.314929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.318728] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.318948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.318973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.322738] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.322957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.322983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.326782] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.327003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.327028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.330786] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.331012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.331035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.334752] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.334969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.334994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.338683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.338869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.338894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.342637] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.342823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.342847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.346698] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.346886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.346929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.350665] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.350853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.350877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.354802] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.355030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.355053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.358898] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.359118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.359142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.362873] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.363096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.363121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.366844] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.367063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.367088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:21:42.761 [2024-10-01 06:14:08.371175] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:42.761 [2024-10-01 06:14:08.371243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:42.761 [2024-10-01 06:14:08.371271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.023 [2024-10-01 06:14:08.375286] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.375321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.375350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.379017] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.379067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.379095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.383069] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.383104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.383133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.386759] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.386981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.387006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.390710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.390899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.390939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.394717] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.394905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.394942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.398652] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.398841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.398865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.402640] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.402829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.402853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.406751] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.406970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.406995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.410787] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.411011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.411036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.415018] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.415054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.415082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.418714] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.418902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.418952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.422822] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.423023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.423060] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.426723] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.426923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.426962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.430741] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.430956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.430982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.434796] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.435012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.435036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.438801] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.439004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.439042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.442755] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.442970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.442996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.446840] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.447038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.447057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.450805] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.451008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:43.024 [2024-10-01 06:14:08.451047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.454896] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.455115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.455132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.458933] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.458968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.458999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.462646] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.462834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.462858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.466620] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.466811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.466835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.470638] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.470827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.470851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.474639] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.474826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.474846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.478729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.024 [2024-10-01 06:14:08.478948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.024 [2024-10-01 06:14:08.478974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.024 [2024-10-01 06:14:08.482716] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.482903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.482939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.486664] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.486852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.486877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.490718] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.490906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.490961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.494633] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.494822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.494846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.498710] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.498897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.498955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.502684] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.502871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.502894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.506697] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.506886] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.506909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.510603] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.510790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.510814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.514607] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.514794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.514817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.518649] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.518837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.518860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.522683] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.522871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.522895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.526761] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.526980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.527006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.530729] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.530949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.530975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.534721] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 
00:21:43.025 [2024-10-01 06:14:08.534906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.534966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.538682] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.538871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.538895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.542630] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.542817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.542841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.546722] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.546944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.546970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.550727] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.550948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.550974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.554878] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.555099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.555123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.558860] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.559067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.559105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.562806] 
nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.563028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.563053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.566744] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.566961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.566985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.570765] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.570986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.571011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.574731] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.574949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.574975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.578660] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.578851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.578874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.582641] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.582830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.582855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.586647] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.586834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.025 [2024-10-01 06:14:08.586857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:21:43.025 [2024-10-01 06:14:08.590637] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.025 [2024-10-01 06:14:08.590824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.590848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.594600] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.594786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.594808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.598644] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.598834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.598858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.602670] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.602858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.602878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.606828] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.607074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.610807] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.611030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.611071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:43.026 [2024-10-01 06:14:08.614797] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50) 00:21:43.026 [2024-10-01 06:14:08.615019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:43.026 [2024-10-01 06:14:08.615044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:43.026 [2024-10-01 06:14:08.618799] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50)
00:21:43.026 [2024-10-01 06:14:08.619020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.026 [2024-10-01 06:14:08.619045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:21:43.026 [2024-10-01 06:14:08.622848] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50)
00:21:43.026 [2024-10-01 06:14:08.623069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.026 [2024-10-01 06:14:08.623093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:21:43.026 [2024-10-01 06:14:08.626925] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50)
00:21:43.026 [2024-10-01 06:14:08.626960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.026 [2024-10-01 06:14:08.626988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:21:43.026 [2024-10-01 06:14:08.630719] nvme_tcp.c:1470:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x15cef50)
00:21:43.026 [2024-10-01 06:14:08.630908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:43.026 [2024-10-01 06:14:08.630957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:21:43.286 7843.00 IOPS, 980.38 MiB/s
00:21:43.286 Latency(us)
00:21:43.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.286 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:21:43.286 nvme0n1 : 2.00 7839.71 979.96 0.00 0.00 2038.09 1616.06 5570.56
00:21:43.286 ===================================================================================================================
00:21:43.286 Total : 7839.71 979.96 0.00 0.00 2038.09 1616.06 5570.56
00:21:43.286 {
00:21:43.286 "results": [
00:21:43.286 {
00:21:43.286 "job": "nvme0n1",
00:21:43.286 "core_mask": "0x2",
00:21:43.286 "workload": "randread",
00:21:43.286 "status": "finished",
00:21:43.286 "queue_depth": 16,
00:21:43.286 "io_size": 131072,
00:21:43.286 "runtime": 2.002879,
00:21:43.286 "iops": 7839.714730645236,
00:21:43.286 "mibps": 979.9643413306545,
00:21:43.286 "io_failed": 0,
00:21:43.286 "io_timeout": 0,
00:21:43.286 "avg_latency_us": 2038.0850059633399,
00:21:43.286 "min_latency_us": 1616.0581818181818,
00:21:43.286 "max_latency_us": 5570.56
00:21:43.286 }
00:21:43.286 ],
00:21:43.286 "core_count": 1
00:21:43.286 }
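(Editor's note: the bdevperf summary above reports 7839.71 IOPS and 979.96 MiB/s over a ~2 s randread run with 131072-byte I/Os. The MiB/s figure follows directly from iops * io_size. A standalone Python sketch, not part of the test suite, cross-checking that arithmetic against the JSON block printed above:)

```python
# Standalone sketch (not part of host/digest.sh): cross-check the bdevperf JSON
# summary printed above. MiB/s should equal iops * io_size / 2**20.
import json

summary = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1", "core_mask": "0x2", "workload": "randread",
      "status": "finished", "queue_depth": 16, "io_size": 131072,
      "runtime": 2.002879, "iops": 7839.714730645236,
      "mibps": 979.9643413306545, "io_failed": 0, "io_timeout": 0,
      "avg_latency_us": 2038.0850059633399,
      "min_latency_us": 1616.0581818181818, "max_latency_us": 5570.56
    }
  ],
  "core_count": 1
}
""")

job = summary["results"][0]
derived_mibps = job["iops"] * job["io_size"] / 2**20   # 7839.71 IOPS * 128 KiB ~= 979.96 MiB/s
assert abs(derived_mibps - job["mibps"]) < 1e-6
print(f"{job['job']}: {job['iops']:.2f} IOPS, {derived_mibps:.2f} MiB/s, "
      f"avg {job['avg_latency_us']:.2f} us over {job['runtime']:.2f} s")
```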
00:21:43.286 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:21:43.286 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:21:43.286 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:21:43.286 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:21:43.286 | .driver_specific
00:21:43.286 | .nvme_error
00:21:43.286 | .status_code
00:21:43.286 | .command_transient_transport_error'
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 506 > 0 ))
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94574
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94574 ']'
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94574
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94574
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:21:43.546 killing process with pid 94574
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94574'
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94574
00:21:43.546 Received shutdown signal, test time was about 2.000000 seconds
00:21:43.546
00:21:43.546 Latency(us)
00:21:43.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:43.546 ===================================================================================================================
00:21:43.546 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:43.546 06:14:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94574
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94627
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94627 /var/tmp/bperf.sock
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94627 ']'
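(Editor's note: the trace above is the pass/fail check for the randread run: get_transient_errcount reads bdev I/O statistics over the bperf RPC socket, pulls the command_transient_transport_error counter with jq, and requires it to be greater than zero (506 here) before killing bperf and moving on to the randwrite case. A rough Python equivalent of that shell pipeline, offered only as a sketch; the socket path, rpc.py path, and bdev name are taken from the trace, and the actual test does this with jq in host/digest.sh:)

```python
# Sketch of the get_transient_errcount check traced above: query bdev I/O statistics
# over the bperf RPC socket and extract the NVMe transient transport error counter
# that the injected CRC32C (digest) corruption is expected to drive above zero.
import json
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # path as shown in the trace
SOCK = "/var/tmp/bperf.sock"

raw = subprocess.check_output([RPC, "-s", SOCK, "bdev_get_iostat", "-b", "nvme0n1"])
iostat = json.loads(raw)

# Same selection as the jq filter in host/digest.sh:
#   .bdevs[0] | .driver_specific | .nvme_error | .status_code
#             | .command_transient_transport_error
errcount = (iostat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])
assert errcount > 0, "expected transient transport errors with digest corruption injected"
print(f"command_transient_transport_error = {errcount}")
```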
00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:21:43.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:43.546 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:43.546 [2024-10-01 06:14:09.150673] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:43.546 [2024-10-01 06:14:09.150930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94627 ] 00:21:43.805 [2024-10-01 06:14:09.282329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.805 [2024-10-01 06:14:09.315025] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:43.805 [2024-10-01 06:14:09.342482] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:43.805 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:43.805 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:43.805 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:43.805 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:44.372 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:44.372 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.372 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.372 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.372 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.373 06:14:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:44.373 nvme0n1 00:21:44.632 06:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:21:44.632 06:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.632 06:14:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:44.632 06:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.632 06:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:44.632 06:14:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:44.632 Running I/O for 2 seconds... 00:21:44.632 [2024-10-01 06:14:10.132140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fef90 00:21:44.632 [2024-10-01 06:14:10.134801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.135054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.147700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:44.632 [2024-10-01 06:14:10.150130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.150166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.162160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fe2e8 00:21:44.632 [2024-10-01 06:14:10.164706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.164894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.177178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fda78 00:21:44.632 [2024-10-01 06:14:10.179624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:7651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.179659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.191855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fd208 00:21:44.632 [2024-10-01 06:14:10.194290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.194325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.206060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fc998 00:21:44.632 [2024-10-01 06:14:10.208358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:3560 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.208392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.220289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fc128 00:21:44.632 [2024-10-01 06:14:10.222414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.222448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:21:44.632 [2024-10-01 06:14:10.234526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fb8b8 00:21:44.632 [2024-10-01 06:14:10.236783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.632 [2024-10-01 06:14:10.237010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.249908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fb048 00:21:44.893 [2024-10-01 06:14:10.252491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.252680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.264776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198fa7d8 00:21:44.893 [2024-10-01 06:14:10.267137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.267172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.279155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f9f68 00:21:44.893 [2024-10-01 06:14:10.281304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:3025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.281338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.293443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f96f8 00:21:44.893 [2024-10-01 06:14:10.295553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.295588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.307654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f8e88 00:21:44.893 [2024-10-01 06:14:10.309864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:21154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.309925] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.321953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f8618 00:21:44.893 [2024-10-01 06:14:10.324030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.324241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.336096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f7da8 00:21:44.893 [2024-10-01 06:14:10.338266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:12270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.338299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.349723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f7538 00:21:44.893 [2024-10-01 06:14:10.351701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:2249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.351733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.363359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f6cc8 00:21:44.893 [2024-10-01 06:14:10.365313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:15931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.365346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.376496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f6458 00:21:44.893 [2024-10-01 06:14:10.378411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.378443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.389663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f5be8 00:21:44.893 [2024-10-01 06:14:10.391705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.391733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.403167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f5378 00:21:44.893 [2024-10-01 06:14:10.405084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.405117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.416463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f4b08 00:21:44.893 [2024-10-01 06:14:10.418418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.418446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.429776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f4298 00:21:44.893 [2024-10-01 06:14:10.431678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.431709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.443054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f3a28 00:21:44.893 [2024-10-01 06:14:10.444917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:20750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.444976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.456483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f31b8 00:21:44.893 [2024-10-01 06:14:10.458286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.458334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.469705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f2948 00:21:44.893 [2024-10-01 06:14:10.471702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.471875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.483375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f20d8 00:21:44.893 [2024-10-01 06:14:10.485328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 06:14:10.485530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:21:44.893 [2024-10-01 06:14:10.497228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f1868 00:21:44.893 [2024-10-01 06:14:10.499083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:8436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:44.893 [2024-10-01 
06:14:10.499284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.512148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f0ff8 00:21:45.153 [2024-10-01 06:14:10.514132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:3524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.514337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.525996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198f0788 00:21:45.153 [2024-10-01 06:14:10.527811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.528090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.539613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198eff18 00:21:45.153 [2024-10-01 06:14:10.541557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.541770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.553630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ef6a8 00:21:45.153 [2024-10-01 06:14:10.555555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21477 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.555772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.567402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198eee38 00:21:45.153 [2024-10-01 06:14:10.569428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.569649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.581266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ee5c8 00:21:45.153 [2024-10-01 06:14:10.583027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.583236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.594780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198edd58 00:21:45.153 [2024-10-01 06:14:10.596650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:45.153 [2024-10-01 06:14:10.596845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.608753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ed4e8 00:21:45.153 [2024-10-01 06:14:10.610521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.610697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.622385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ecc78 00:21:45.153 [2024-10-01 06:14:10.624041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:8909 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.624228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.635817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ec408 00:21:45.153 [2024-10-01 06:14:10.637532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.637565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.649153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ebb98 00:21:45.153 [2024-10-01 06:14:10.650706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:22919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.153 [2024-10-01 06:14:10.650739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:21:45.153 [2024-10-01 06:14:10.662426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198eb328 00:21:45.154 [2024-10-01 06:14:10.664168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:14077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.664230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.675686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198eaab8 00:21:45.154 [2024-10-01 06:14:10.677539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.677571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.689224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198ea248 00:21:45.154 [2024-10-01 06:14:10.690728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6074 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.690762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.702393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e99d8 00:21:45.154 [2024-10-01 06:14:10.704194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:4849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.704415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.716172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e9168 00:21:45.154 [2024-10-01 06:14:10.717779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:12680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.718020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.730195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e88f8 00:21:45.154 [2024-10-01 06:14:10.731841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:15421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.732106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.745438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e8088 00:21:45.154 [2024-10-01 06:14:10.747535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:79 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.747803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:21:45.154 [2024-10-01 06:14:10.761111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e7818 00:21:45.154 [2024-10-01 06:14:10.762711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.154 [2024-10-01 06:14:10.762925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:45.413 [2024-10-01 06:14:10.776084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e6fa8 00:21:45.413 [2024-10-01 06:14:10.777667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.413 [2024-10-01 06:14:10.777869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.789836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e6738 00:21:45.414 [2024-10-01 06:14:10.791458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11896 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.791627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.803631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e5ec8 00:21:45.414 [2024-10-01 06:14:10.805265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.805502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.817552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e5658 00:21:45.414 [2024-10-01 06:14:10.818984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.819034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.831516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e4de8 00:21:45.414 [2024-10-01 06:14:10.833009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.833059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.845476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e4578 00:21:45.414 [2024-10-01 06:14:10.846865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.846924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.859037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e3d08 00:21:45.414 [2024-10-01 06:14:10.860495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.860684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.872731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e3498 00:21:45.414 [2024-10-01 06:14:10.874178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.874396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.886472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e2c28 00:21:45.414 [2024-10-01 06:14:10.887943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:109 nsid:1 lba:19238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.888142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.900761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e23b8 00:21:45.414 [2024-10-01 06:14:10.902377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.902583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.916961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e1b48 00:21:45.414 [2024-10-01 06:14:10.918671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:4885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.918895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.932656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e12d8 00:21:45.414 [2024-10-01 06:14:10.934230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.934518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.947220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e0a68 00:21:45.414 [2024-10-01 06:14:10.948745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.949002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.961472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e01f8 00:21:45.414 [2024-10-01 06:14:10.962824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:14597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.963065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.975249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198df988 00:21:45.414 [2024-10-01 06:14:10.976792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.977009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:10.989139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198df118 00:21:45.414 [2024-10-01 06:14:10.990355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:19615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:10.990388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:11.002429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198de8a8 00:21:45.414 [2024-10-01 06:14:11.003605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:11.003638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:21:45.414 [2024-10-01 06:14:11.015807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198de038 00:21:45.414 [2024-10-01 06:14:11.017118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.414 [2024-10-01 06:14:11.017151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.035973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198de038 00:21:45.674 [2024-10-01 06:14:11.038206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:2958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.038240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.049502] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198de8a8 00:21:45.674 [2024-10-01 06:14:11.051648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.051679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.063055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198df118 00:21:45.674 [2024-10-01 06:14:11.065256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.065289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.076425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198df988 00:21:45.674 [2024-10-01 06:14:11.078659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.078687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.089856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e01f8 00:21:45.674 [2024-10-01 
06:14:11.092007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.092197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.103415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198e0a68 00:21:45.674 [2024-10-01 06:14:11.105628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:25301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.105661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:21:45.674 17965.00 IOPS, 70.18 MiB/s [2024-10-01 06:14:11.114747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.115102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.115282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.125567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.125928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.126150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.136564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.136887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.136930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.147846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.148254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.148282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.158938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.159116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.159136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.169639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.169826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.169846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.180601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.180781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.180802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.191299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.191476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.191496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.202305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.202633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.202654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.213302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.213648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.213670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.224342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.224674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.224695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.235289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.235622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.235644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.246406] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.246727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.246750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.257531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.257846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.257867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.268610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.268982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.269007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.674 [2024-10-01 06:14:11.279529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.674 [2024-10-01 06:14:11.279848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.674 [2024-10-01 06:14:11.279868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.291295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.291509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.291530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.302370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.302690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.302712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.313442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.313761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.313782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
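For reference, the randwrite pass whose digest errors are printed above follows the sequence logged at its start: bdevperf is launched against the bperf RPC socket, per-command NVMe error statistics and unlimited retries are enabled, CRC-32C corruption is injected into the accel layer, and a data-digest-enabled TCP controller is attached before perform_tests drives I/O for 2 seconds. A condensed recap of those commands as they appear in this log (addresses, NQN and the rpc_cmd harness helper are specific to this environment):

  # Start bdevperf in wait mode on the bperf RPC socket (randwrite, 4 KiB, QD 128, 2 s)
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  # Enable per-command NVMe error statistics and unlimited bdev retries
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Clear any previous crc32c error injection (rpc_cmd is the autotest harness helper)
  rpc_cmd accel_error_inject_error -o crc32c -t disable
  # Attach the target with data digest enabled
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 256 crc32c operations so computed data digests are wrong, then run the workload
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests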
00:21:45.936 [2024-10-01 06:14:11.325312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.325496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.325517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.337547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.337743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.337766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.350063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.350291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.350331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.362031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.362229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.362250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.373669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.373858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.373878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.384855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.385221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.396261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.396500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.396521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.407488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.936 [2024-10-01 06:14:11.407693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.936 [2024-10-01 06:14:11.407713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.936 [2024-10-01 06:14:11.418565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.418765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.418786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.429794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.430018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.430039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.441015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.441214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.441235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.452354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.452549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.452570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.463632] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.463832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.463853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.474657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.474859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.474879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.485916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.486113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:25271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.486134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.497220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.497427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.497447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.508443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.508648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.508669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.519605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.519801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.519821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.530887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.531119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.531140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:45.937 [2024-10-01 06:14:11.541572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:45.937 [2024-10-01 06:14:11.541776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:6544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:45.937 [2024-10-01 06:14:11.541796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.553294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.553488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.553508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.564576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.564776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.564796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.575182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.575377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.575397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.585892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.586093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.586114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.596579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.596772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.596792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.607087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.607277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.607297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.617654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.617846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.617866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.628259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.628469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 
[2024-10-01 06:14:11.628506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.638725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.638936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.638957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.649365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.649576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.649597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.660029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.660221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.660262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.670551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.670756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:7034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.670776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.681329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.681535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.681555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.691808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.692068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.692089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.702512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.702704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17092 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.702725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.713185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.713378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:23114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.713398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.723747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.724012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.724034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.734521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.734710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.734730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.196 [2024-10-01 06:14:11.745214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.196 [2024-10-01 06:14:11.745407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.196 [2024-10-01 06:14:11.745426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.756029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.756228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.756266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.766587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.766779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.766799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.777199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.777395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 
nsid:1 lba:5101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.777414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.787742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.787970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:21635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.787991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.798594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.798800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.798820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.197 [2024-10-01 06:14:11.809552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.197 [2024-10-01 06:14:11.809758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.197 [2024-10-01 06:14:11.809793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.821064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.821254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.821274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.831601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.831795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.831815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.842208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.842399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.842419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.852896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.853097] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.853116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.863624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.863817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.863837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.874243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.874433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.874453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.884901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.885108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:13090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.885129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.895355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.895553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.895573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.905836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.906040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.906061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.916466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.916668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.916687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.927999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.928221] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.928248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.940462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.940660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.940682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.952793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.953069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.953100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.964446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.964646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:18133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.964666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.975794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.976062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.976094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.987063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.987258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:1957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.987279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:11.998379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:11.998592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:11.998613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:12.009635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 
00:21:46.455 [2024-10-01 06:14:12.009834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:24485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:12.009854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:12.020872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:12.021085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:12.021105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:12.032084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:12.032342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.455 [2024-10-01 06:14:12.032369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.455 [2024-10-01 06:14:12.043218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.455 [2024-10-01 06:14:12.043417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.456 [2024-10-01 06:14:12.043437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.456 [2024-10-01 06:14:12.054487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.456 [2024-10-01 06:14:12.054699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:10049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.456 [2024-10-01 06:14:12.054737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.456 [2024-10-01 06:14:12.065720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.456 [2024-10-01 06:14:12.065920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.456 [2024-10-01 06:14:12.065951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.714 [2024-10-01 06:14:12.077818] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.714 [2024-10-01 06:14:12.078022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.714 [2024-10-01 06:14:12.078042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.714 [2024-10-01 06:14:12.088545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.715 [2024-10-01 06:14:12.088751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.715 [2024-10-01 06:14:12.088771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.715 [2024-10-01 06:14:12.099143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.715 [2024-10-01 06:14:12.099340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.715 [2024-10-01 06:14:12.099360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.715 20506.50 IOPS, 80.10 MiB/s [2024-10-01 06:14:12.109849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e210) with pdu=0x2000198feb58 00:21:46.715 [2024-10-01 06:14:12.110057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:46.715 [2024-10-01 06:14:12.110078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:21:46.715 00:21:46.715 Latency(us) 00:21:46.715 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.715 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:21:46.715 nvme0n1 : 2.01 20514.59 80.14 0.00 0.00 6228.02 4021.53 29669.93 00:21:46.715 =================================================================================================================== 00:21:46.715 Total : 20514.59 80.14 0.00 0.00 6228.02 4021.53 29669.93 00:21:46.715 { 00:21:46.715 "results": [ 00:21:46.715 { 00:21:46.715 "job": "nvme0n1", 00:21:46.715 "core_mask": "0x2", 00:21:46.715 "workload": "randwrite", 00:21:46.715 "status": "finished", 00:21:46.715 "queue_depth": 128, 00:21:46.715 "io_size": 4096, 00:21:46.715 "runtime": 2.005451, 00:21:46.715 "iops": 20514.587491791124, 00:21:46.715 "mibps": 80.13510738980908, 00:21:46.715 "io_failed": 0, 00:21:46.715 "io_timeout": 0, 00:21:46.715 "avg_latency_us": 6228.019449918352, 00:21:46.715 "min_latency_us": 4021.5272727272727, 00:21:46.715 "max_latency_us": 29669.934545454544 00:21:46.715 } 00:21:46.715 ], 00:21:46.715 "core_count": 1 00:21:46.715 } 00:21:46.715 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:46.715 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:46.715 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:46.715 | .driver_specific 00:21:46.715 | .nvme_error 00:21:46.715 | .status_code 00:21:46.715 | .command_transient_transport_error' 00:21:46.715 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 161 > 0 )) 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94627 00:21:46.974 
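
[editor's note] The run above finished one randwrite pass (queue depth 128, 4 KiB I/O) against nvme0n1 with data digest corruption injected, so each WRITE completes with a transient transport error; 20514.59 IOPS at 4 KiB is the reported 80.14 MiB/s (20514.59 x 4096 B / 2^20). The digest.sh trace then verifies that the error counter is non-zero by reading bdev I/O statistics over the bperf RPC socket and filtering with jq (these per-error counters are only populated when bdev_nvme_set_options --nvme-error-stat is applied, as shown for the next run below). A minimal sketch of that check, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev seen in the trace:

  # Sketch of the transient-error check visible in the digest.sh trace above.
  # Assumes the bdevperf RPC socket path and bdev name used in this run.
  BPERF_SOCK=/var/tmp/bperf.sock
  ERRS=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The assertion in the trace is the same shape: the test passes when at least
  # one transient transport error was counted (161 in this run).
  (( ERRS > 0 )) || exit 1
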
06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94627 ']' 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94627 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94627 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94627' 00:21:46.974 killing process with pid 94627 00:21:46.974 Received shutdown signal, test time was about 2.000000 seconds 00:21:46.974 00:21:46.974 Latency(us) 00:21:46.974 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.974 =================================================================================================================== 00:21:46.974 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94627 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94627 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=94673 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 94673 /var/tmp/bperf.sock 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 94673 ']' 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:46.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:46.974 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.233 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:47.233 Zero copy mechanism will not be used. 00:21:47.233 [2024-10-01 06:14:12.628450] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:21:47.233 [2024-10-01 06:14:12.628558] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94673 ] 00:21:47.233 [2024-10-01 06:14:12.762919] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.233 [2024-10-01 06:14:12.795852] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.233 [2024-10-01 06:14:12.822757] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:47.491 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:47.491 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:21:47.491 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.491 06:14:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:47.750 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:21:48.009 nvme0n1 00:21:48.009 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:21:48.009 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.009 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:48.010 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.010 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:21:48.010 06:14:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:21:48.010 I/O size of 131072 is greater than zero copy threshold (65536). 00:21:48.010 Zero copy mechanism will not be used. 00:21:48.010 Running I/O for 2 seconds... 00:21:48.010 [2024-10-01 06:14:13.576493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.576766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.576793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.581069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.581363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.581394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.585668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.585955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.585982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.590213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.590513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.590555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.594740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.595035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.595062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.599269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.599532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.599559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.603781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.604125] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.604153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.608341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.608604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.608632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.612722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.612794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.612815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.617280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.617349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.617370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.010 [2024-10-01 06:14:13.621984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.010 [2024-10-01 06:14:13.622085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.010 [2024-10-01 06:14:13.622108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.626982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.627052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.627074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.631803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.631942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.631981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.636496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 
[2024-10-01 06:14:13.636566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.636587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.641050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.641122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.641143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.645461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.645534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.645555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.649962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.650033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.650054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.654483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.654559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.654580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.658908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.658994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.659015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.663360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.663434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.663455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.667854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.667975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.667997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.672406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.672478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.672498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.676973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.677046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.677066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.681401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.681475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.681495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.685847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.685946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.685968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.690252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.690341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.690362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.694729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.694800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.694820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.699257] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.699327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.699348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.703699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.703771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.703792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.708329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.708399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.708420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.712841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.712914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.712935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.717820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.717890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.717943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.271 [2024-10-01 06:14:13.722738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.271 [2024-10-01 06:14:13.722809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.271 [2024-10-01 06:14:13.722830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.727588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.727660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.727683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:48.272 [2024-10-01 06:14:13.732740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.732813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.732836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.737726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.737797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.737819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.742705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.742776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.742797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.747673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.747747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.747769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.752747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.752821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.752842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.757651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.757724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.757745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.762566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.762638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.762660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.767373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.767443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.767464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.772095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.772172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.772195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.777027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.777098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.777120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.781603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.781673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.781694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.786233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.786308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.786329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.790887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.790971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.790994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.795467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.795541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.795563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.800041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.800116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.800139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.804804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.804876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.804897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.809460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.809533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.809553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.814030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.814101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.814122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.818802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.818874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.818896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.823466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.823537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.823558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.828038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.828113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.828135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.832823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.832913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.832950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.837639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.837711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.837732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.842201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.842271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.842292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.846765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.846837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.846858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.272 [2024-10-01 06:14:13.851558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.272 [2024-10-01 06:14:13.851634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.272 [2024-10-01 06:14:13.851656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.856434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.856505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.856526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.861120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.861191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 
[2024-10-01 06:14:13.861212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.865715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.865786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.865808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.870386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.870460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.870481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.875044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.875114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.875136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.879783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.879853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.879875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.273 [2024-10-01 06:14:13.885094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.273 [2024-10-01 06:14:13.885169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.273 [2024-10-01 06:14:13.885191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.889889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.889979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.890000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.894951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.895061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.895083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.899559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.899631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.899652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.904314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.904394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.904415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.908764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.908845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.908866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.913326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.913398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.913419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.917846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.917941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.917963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.922283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.922355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.922376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.926587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.926660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.926681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.931124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.931191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.931212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.935642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.935723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.935744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.940173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.940229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.940251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.944558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.944627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.944648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.948975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.949044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.953430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.953499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.953520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.957918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.957987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.958008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.962301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.962373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.962394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.966882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.966965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.966987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.971736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.971805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.971827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.976758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.976828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.976849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.533 [2024-10-01 06:14:13.981736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.533 [2024-10-01 06:14:13.981835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.533 [2024-10-01 06:14:13.981857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:13.987130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:13.987205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:13.987244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:13.992355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 
[2024-10-01 06:14:13.992427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:13.992448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:13.997185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:13.997289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:13.997326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.002054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.002127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.002150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.006802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.006875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.006895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.011566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.011650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.011670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.016256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.016341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.016362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.020750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.020822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.020843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.025318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.025389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.025410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.029808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.029876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.029897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.034289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.034377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.034398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.038744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.038816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.038837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.043233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.043306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.043327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.047680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.047751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.047772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.052217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.052321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.052343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.056687] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.056759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.056780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.061170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.061240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.061261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.065570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.065666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.065689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.070099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.070173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.070196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.074542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.074614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.074635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.078979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.079059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.079080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.083314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.083395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.083416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:48.534 [2024-10-01 06:14:14.087706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.087777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.087798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.092318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.092389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.092410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.096804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.096875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.096896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.101266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.101339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.101360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.105689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.534 [2024-10-01 06:14:14.105760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.534 [2024-10-01 06:14:14.105781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.534 [2024-10-01 06:14:14.110161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.110234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.110255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.114604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.114676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.114697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.119146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.119217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.119238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.123462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.123539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.123560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.127958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.128030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.128052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.132424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.132496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.132517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.136884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.136963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.136983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.141373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.141444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.141465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.535 [2024-10-01 06:14:14.146255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.535 [2024-10-01 06:14:14.146341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.535 [2024-10-01 06:14:14.146363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.150959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.151029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.151050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.155847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.155984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.156006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.160498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.160570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.160590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.165017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.165087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.165107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.169487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.169567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.169587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.174088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.174162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.174184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.178536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.178608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.178629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.183050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.183121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.183142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.187425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.187493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.187514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.191856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.191968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.191992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.196427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.196499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.196520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.201026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.201099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.201120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.205585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.205664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.205685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.210157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.210227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 
[2024-10-01 06:14:14.210249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.214654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.214726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.214747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.219174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.219246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.219267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.223622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.223693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.223713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.228188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.228293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.228314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.232834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.232904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.232925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.237302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.237374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.237395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.241813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.241885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.241905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.246215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.246302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.246324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.250638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.250706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.250727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.255258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.255347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.255367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.259803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.259875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.259948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.264377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.795 [2024-10-01 06:14:14.264449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.795 [2024-10-01 06:14:14.264470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.795 [2024-10-01 06:14:14.268886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.268966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.268998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.273336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.273409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.273430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.278043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.278113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.278133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.282466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.282535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.282556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.286886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.286965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.286985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.291286] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.291358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.291379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.295731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.295804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.295825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.300425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.300507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.300528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.304876] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.304974] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.304995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.309296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.309368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.309388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.313919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.314000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.314021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.318494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.318566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.318587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.322910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.322981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.323002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.327264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.327336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.327356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.331641] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.331713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.331734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.336164] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 
[2024-10-01 06:14:14.336251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.336273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.340591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.340664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.340684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.345082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.345154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.345175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.349458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.349541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.349561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.353988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.354067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.354088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.358426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.358498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.362893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.362986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.363006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.367248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.367331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.367352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.371634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.371706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.371726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.376162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.376235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.376256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.380504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.380577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.380597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.384911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.384997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.796 [2024-10-01 06:14:14.385017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.796 [2024-10-01 06:14:14.389320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.796 [2024-10-01 06:14:14.389393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.797 [2024-10-01 06:14:14.389414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:48.797 [2024-10-01 06:14:14.393761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.797 [2024-10-01 06:14:14.393830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.797 [2024-10-01 06:14:14.393851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:48.797 [2024-10-01 06:14:14.398220] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.797 [2024-10-01 06:14:14.398292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.797 [2024-10-01 06:14:14.398312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:48.797 [2024-10-01 06:14:14.402562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.797 [2024-10-01 06:14:14.402634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.797 [2024-10-01 06:14:14.402654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:48.797 [2024-10-01 06:14:14.407401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:48.797 [2024-10-01 06:14:14.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:48.797 [2024-10-01 06:14:14.407527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.412362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.412433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.412454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.417261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.417333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.417354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.421647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.421718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.421739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.426113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.426181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.426202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:21:49.057 [2024-10-01 06:14:14.430613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.430681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.430702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.435107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.435178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.435199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.439522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.439594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.439615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.444030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.444108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.444130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.448459] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.448531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.448551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.452892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.453002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.453035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.457435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.457517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.457539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.462026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.462098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.462120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.466438] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.466517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.466537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.470950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.471018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.471038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.475324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.475394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.475415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.057 [2024-10-01 06:14:14.479861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.057 [2024-10-01 06:14:14.479968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.057 [2024-10-01 06:14:14.479989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.484303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.484375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.484395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.488682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.488753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.488773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.493121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.493190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.493212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.497525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.497603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.497623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.502017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.502100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.502120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.506445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.506514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.506535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.510868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.510958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.510979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.515350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.515419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.515440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.519769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.519847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.519868] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.524359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.524438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.524459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.528724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.528797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.528818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.533230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.533327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.533347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.537670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.537738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.537759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.542174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.542245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.542266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.546531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.546601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.546622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.551017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.551089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:49.058 [2024-10-01 06:14:14.551110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.555426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.555495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.555516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.559992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.560065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.560088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.564414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.564486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.564507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 6728.00 IOPS, 841.00 MiB/s [2024-10-01 06:14:14.570396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.570468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.570489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.574916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.575004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.579327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.579398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.579419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.583832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.583953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.583976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.588419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.588500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.588520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.592814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.592893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.592931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.597242] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.597329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.597350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.601638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.058 [2024-10-01 06:14:14.601719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.058 [2024-10-01 06:14:14.601740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.058 [2024-10-01 06:14:14.606118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.606191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.606211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.610464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.610534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.610554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.614959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.615030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.615051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.619348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.619423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.619443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.623684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.623766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.623786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.628240] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.628340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.628361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.632685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.632753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.632773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.637259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.637346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.637367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.641720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.641788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.641809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.646216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 
[2024-10-01 06:14:14.646287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.646323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.650655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.650727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.650747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.655225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.655293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.655313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.659686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.659758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.659778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.664246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.664350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.664371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.059 [2024-10-01 06:14:14.669026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.059 [2024-10-01 06:14:14.669113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.059 [2024-10-01 06:14:14.669135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.319 [2024-10-01 06:14:14.673846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.319 [2024-10-01 06:14:14.673934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.319 [2024-10-01 06:14:14.673967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.319 [2024-10-01 06:14:14.678743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.319 [2024-10-01 06:14:14.678825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.319 [2024-10-01 06:14:14.678846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.319 [2024-10-01 06:14:14.683241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.319 [2024-10-01 06:14:14.683314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.319 [2024-10-01 06:14:14.683334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.319 [2024-10-01 06:14:14.687656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.319 [2024-10-01 06:14:14.687725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.319 [2024-10-01 06:14:14.687746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.319 [2024-10-01 06:14:14.692147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.692234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.692285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.696600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.696667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.696688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.701053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.701126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.701147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.705452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.705525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.705545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.709891] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.709989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.710010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.714267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.714352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.714373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.718844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.718916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.718937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.723231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.723301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.723321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.727656] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.727729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.727749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.732207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.732309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.732329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.736598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.736670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.736690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:49.320 [2024-10-01 06:14:14.741082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.741154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.741175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.745387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.745465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.745486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.749806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.749879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.749899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.754402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.754482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.754503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.758934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.759005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.759041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.763335] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.763405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.763426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.767747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.767820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.767841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.772444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.772517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.772538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.776875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.776965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.776986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.781213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.781285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.781306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.785583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.785653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.785674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.790103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.790173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.790195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.794465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.794544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.794564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.799030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.799101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.799123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.803408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.803487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.803509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.320 [2024-10-01 06:14:14.807812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.320 [2024-10-01 06:14:14.807883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.320 [2024-10-01 06:14:14.807957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.812366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.812438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.812458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.816807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.816878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.816898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.821370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.821442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.821463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.825806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.825875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.825896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.830357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.830429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.830449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.834835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.834907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.834942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.839241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.839313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.839334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.843608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.843680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.843701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.848115] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.848186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.848208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.852582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.852659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.852680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.857149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.857216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.857237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.861527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.861605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 
[2024-10-01 06:14:14.861626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.865959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.866033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.866054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.870447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.870518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.870540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.874979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.875049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.875071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.879398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.879467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.879487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.883825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.883948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.883970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.888434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.888502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.888524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.892881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.892975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.892996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.897321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.897382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.897403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.902382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.902452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.902474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.907143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.907215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.907251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.912119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.912188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.912212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.917181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.917269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.917291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.922406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.922477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.922498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.927403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.927478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.321 [2024-10-01 06:14:14.927499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.321 [2024-10-01 06:14:14.932634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.321 [2024-10-01 06:14:14.932723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.322 [2024-10-01 06:14:14.932746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.937938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.938018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.938039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.942954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.943027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.943048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.947500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.947585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.947606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.952209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.952344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.952365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.957102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.957187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.957209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.961737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.961819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.961840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.966347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.966417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.966438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.971194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.971278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.971299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.975774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.975854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.975875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.980498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.980580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.980601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.985252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.985322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.985344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.989872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:14.989956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.989977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.994855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 
[2024-10-01 06:14:14.994966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:14.994988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:14.999879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.000025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.000050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.005152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.005228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.005251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.010511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.010576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.010599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.015993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.016065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.016093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.021104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.021188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.021213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.026180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.026262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.026329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.031215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.031331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.031353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.035988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.036066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.036090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.040585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.040657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.040679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.045328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.045403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.045425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.050091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.050163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.050185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.054706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.054776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.582 [2024-10-01 06:14:15.054797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.582 [2024-10-01 06:14:15.059345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.582 [2024-10-01 06:14:15.059432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.059453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.064190] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.064263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.064300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.068850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.068931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.068952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.073528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.073603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.073624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.078509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.078580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.078602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.083139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.083210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.083231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.087633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.087704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.087725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.092411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.092483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.092504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:49.583 [2024-10-01 06:14:15.096845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.096917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.096938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.101272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.101344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.101365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.105732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.105802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.105822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.110235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.110305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.110325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.114634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.114706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.114726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.119238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.119313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.119334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.123672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.123743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.123764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.128146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.128217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.128238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.132559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.132630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.132651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.136943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.137022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.137043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.141407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.141508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.141529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.145832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.145924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.145945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.150206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.150275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.150296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.154716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.154800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.154821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.159202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.159274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.159294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.163601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.163673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.163693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.168076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.168150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.168173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.172487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.172566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.172587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.583 [2024-10-01 06:14:15.177124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.583 [2024-10-01 06:14:15.177197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.583 [2024-10-01 06:14:15.177218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.584 [2024-10-01 06:14:15.181532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.584 [2024-10-01 06:14:15.181621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.584 [2024-10-01 06:14:15.181643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.584 [2024-10-01 06:14:15.186131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.584 [2024-10-01 06:14:15.186202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.584 [2024-10-01 06:14:15.186223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.584 [2024-10-01 06:14:15.190539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.584 [2024-10-01 06:14:15.190609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.584 [2024-10-01 06:14:15.190629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.584 [2024-10-01 06:14:15.195472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.584 [2024-10-01 06:14:15.195562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.584 [2024-10-01 06:14:15.195583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.843 [2024-10-01 06:14:15.200212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.843 [2024-10-01 06:14:15.200315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.843 [2024-10-01 06:14:15.200352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.843 [2024-10-01 06:14:15.205084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.843 [2024-10-01 06:14:15.205156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.205177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.209522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.209606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.209626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.214075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.214146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.214167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.218465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.218544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 
[2024-10-01 06:14:15.218564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.223016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.223088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.223108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.227405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.227477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.227498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.232002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.232080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.232101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.236419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.236489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.236510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.240834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.240906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.240927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.245258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.245330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.245350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.249634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.249706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.249726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.254193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.254264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.254284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.258552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.258625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.258646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.263039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.263109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.263129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.267348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.267429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.267449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.271751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.271823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.271843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.276291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.276380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.276400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.280788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.280860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.280881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.285263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.285334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.285355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.289595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.289678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.289699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.294083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.294155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.294176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.298485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.298557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.298579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.302864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.302943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.302964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.307206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.307279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.307299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.311504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.311583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.311603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.315950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.316011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.316032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.320387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.320454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.320474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.324877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.324976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.324997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.329381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.329453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.329474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.333900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.333994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.334030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.338377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.338455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.338476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.342851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 
[2024-10-01 06:14:15.342935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.342956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.347207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.347278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.347299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.351513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.351596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.351616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.356150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.356224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.356260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.844 [2024-10-01 06:14:15.360697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.844 [2024-10-01 06:14:15.360767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.844 [2024-10-01 06:14:15.360787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.365180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.365250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.365271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.369601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.369673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.369694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.374182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.374250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.374271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.378597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.378668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.378688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.383099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.383169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.383189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.387397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.387476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.387497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.391849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.391975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.391997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.396294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.396365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.396386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.400704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.400777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.400798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.405208] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.405279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.405316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.409624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.409694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.409715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.414162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.414236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.414257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.418578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.418649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.418670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.423124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.423196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.423216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.427489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.427558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.427579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.431890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.432021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.432042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:21:49.845 [2024-10-01 06:14:15.436374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.436453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.436473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.440763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.440853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.445213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.445285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.445306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.449533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.449600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.449621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:49.845 [2024-10-01 06:14:15.454200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:49.845 [2024-10-01 06:14:15.454276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:49.845 [2024-10-01 06:14:15.454299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.459190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.459278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.459300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.464113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.464187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.464209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.468621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.468694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.468714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.473190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.473269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.473289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.477675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.477757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.477777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.482201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.482275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.482312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.486748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.486821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.486841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.491266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.491338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.491359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.495644] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.495716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.495738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.500134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.500192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.500228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.504490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.504569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.504590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.508923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.509001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.509022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.513326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.513407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.513427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.517780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.517852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.517873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.522414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.522484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.522520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.526911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.526992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.527013] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.531334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.531406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.531426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.535983] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.536043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.536065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.540623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.540693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.540714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.545345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.545426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.545448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.550068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.550144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.550165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.554734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.105 [2024-10-01 06:14:15.554805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.105 [2024-10-01 06:14:15.554826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:21:50.105 [2024-10-01 06:14:15.559399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.106 [2024-10-01 06:14:15.559479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.106 
[2024-10-01 06:14:15.559500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:21:50.106 [2024-10-01 06:14:15.564174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1c5e550) with pdu=0x2000198fef90 00:21:50.106 [2024-10-01 06:14:15.564305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.106 [2024-10-01 06:14:15.564325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:21:50.106 6759.00 IOPS, 844.88 MiB/s 00:21:50.106 Latency(us) 00:21:50.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.106 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:21:50.106 nvme0n1 : 2.00 6754.79 844.35 0.00 0.00 2363.54 1697.98 9770.82 00:21:50.106 =================================================================================================================== 00:21:50.106 Total : 6754.79 844.35 0.00 0.00 2363.54 1697.98 9770.82 00:21:50.106 { 00:21:50.106 "results": [ 00:21:50.106 { 00:21:50.106 "job": "nvme0n1", 00:21:50.106 "core_mask": "0x2", 00:21:50.106 "workload": "randwrite", 00:21:50.106 "status": "finished", 00:21:50.106 "queue_depth": 16, 00:21:50.106 "io_size": 131072, 00:21:50.106 "runtime": 2.003616, 00:21:50.106 "iops": 6754.7873444811785, 00:21:50.106 "mibps": 844.3484180601473, 00:21:50.106 "io_failed": 0, 00:21:50.106 "io_timeout": 0, 00:21:50.106 "avg_latency_us": 2363.5425522253718, 00:21:50.106 "min_latency_us": 1697.9781818181818, 00:21:50.106 "max_latency_us": 9770.821818181817 00:21:50.106 } 00:21:50.106 ], 00:21:50.106 "core_count": 1 00:21:50.106 } 00:21:50.106 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:21:50.106 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:21:50.106 | .driver_specific 00:21:50.106 | .nvme_error 00:21:50.106 | .status_code 00:21:50.106 | .command_transient_transport_error' 00:21:50.106 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:21:50.106 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 436 > 0 )) 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 94673 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94673 ']' 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94673 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94673 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:50.364 killing process with pid 94673 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94673' 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94673 00:21:50.364 Received shutdown signal, test time was about 2.000000 seconds 00:21:50.364 00:21:50.364 Latency(us) 00:21:50.364 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.364 =================================================================================================================== 00:21:50.364 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:50.364 06:14:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94673 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 94482 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 94482 ']' 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 94482 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94482 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:50.623 killing process with pid 94482 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94482' 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 94482 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 94482 00:21:50.623 00:21:50.623 real 0m15.880s 00:21:50.623 user 0m30.337s 00:21:50.623 sys 0m4.435s 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:21:50.623 ************************************ 00:21:50.623 END TEST nvmf_digest_error 00:21:50.623 ************************************ 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@512 -- # nvmfcleanup 00:21:50.623 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:21:50.881 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:50.881 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:21:50.881 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 
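The burst of tcp.c:2233 data_crc32_calc_done errors above is the point of the digest_error test: the NVMe/TCP data digest (DDGST, a CRC32C over the data PDU) fails verification for those writes, and each affected command completes with the generic status COMMAND TRANSIENT TRANSPORT ERROR (00/22) instead of being treated as good data. The (( 436 > 0 )) check then pulls the per-bdev error counters over the bperf RPC socket and extracts that error count with jq. A minimal sketch of the same check, with the jq pipeline collapsed into a single path (it assumes /var/tmp/bperf.sock is still being served, which only holds before the killprocess above):

errs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
# assert that at least one digest failure was surfaced as a transient transport error
(( errs > 0 )) && echo "bperf recorded $errs transient transport errors"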
00:21:50.881 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:50.881 rmmod nvme_tcp 00:21:50.881 rmmod nvme_fabrics 00:21:50.881 rmmod nvme_keyring 00:21:50.881 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@513 -- # '[' -n 94482 ']' 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@514 -- # killprocess 94482 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 94482 ']' 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 94482 00:21:50.882 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (94482) - No such process 00:21:50.882 Process with pid 94482 is not found 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 94482 is not found' 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-save 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@787 -- # iptables-restore 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:50.882 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec 
nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.140 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:21:51.141 00:21:51.141 real 0m32.305s 00:21:51.141 user 1m0.554s 00:21:51.141 sys 0m9.309s 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:21:51.141 ************************************ 00:21:51.141 END TEST nvmf_digest 00:21:51.141 ************************************ 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.141 ************************************ 00:21:51.141 START TEST nvmf_host_multipath 00:21:51.141 ************************************ 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:21:51.141 * Looking for test storage... 
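nvmf_host_multipath, which starts here, exercises SPDK's host-side multipathing: it brings up one subsystem, nqn.2016-06.io.spdk:cnode1, with two TCP listeners on 10.0.0.3 ports 4420 and 4421, attaches both paths from bdevperf with -x multipath, then flips the listeners' ANA states while a bpftrace probe (nvmf_path.bt) confirms which portal is actually carrying I/O. As a point of reference only (the harness drives I/O through bdevperf, not the kernel initiator), the same two-portal layout could be reached with nvme-cli, using the NQN and addresses that appear in the records below:

nvme connect -t tcp -a 10.0.0.3 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme connect -t tcp -a 10.0.0.3 -s 4421 -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys    # both paths show up under the one subsystem; ANA decides which is used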
00:21:51.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:21:51.141 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lcov --version 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:21:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.401 --rc genhtml_branch_coverage=1 00:21:51.401 --rc genhtml_function_coverage=1 00:21:51.401 --rc genhtml_legend=1 00:21:51.401 --rc geninfo_all_blocks=1 00:21:51.401 --rc geninfo_unexecuted_blocks=1 00:21:51.401 00:21:51.401 ' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:21:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.401 --rc genhtml_branch_coverage=1 00:21:51.401 --rc genhtml_function_coverage=1 00:21:51.401 --rc genhtml_legend=1 00:21:51.401 --rc geninfo_all_blocks=1 00:21:51.401 --rc geninfo_unexecuted_blocks=1 00:21:51.401 00:21:51.401 ' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:21:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.401 --rc genhtml_branch_coverage=1 00:21:51.401 --rc genhtml_function_coverage=1 00:21:51.401 --rc genhtml_legend=1 00:21:51.401 --rc geninfo_all_blocks=1 00:21:51.401 --rc geninfo_unexecuted_blocks=1 00:21:51.401 00:21:51.401 ' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:21:51.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:51.401 --rc genhtml_branch_coverage=1 00:21:51.401 --rc genhtml_function_coverage=1 00:21:51.401 --rc genhtml_legend=1 00:21:51.401 --rc geninfo_all_blocks=1 00:21:51.401 --rc geninfo_unexecuted_blocks=1 00:21:51.401 00:21:51.401 ' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:51.401 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:21:51.401 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@472 -- # prepare_net_devs 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@434 -- # local -g is_hw=no 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@436 -- # remove_spdk_ns 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@456 -- # nvmf_veth_init 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:51.402 Cannot find device "nvmf_init_br" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:51.402 Cannot find device "nvmf_init_br2" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:51.402 Cannot find device "nvmf_tgt_br" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:21:51.402 Cannot find device "nvmf_tgt_br2" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:51.402 Cannot find device "nvmf_init_br" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:51.402 Cannot find device "nvmf_init_br2" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:51.402 Cannot find device "nvmf_tgt_br" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:51.402 Cannot find device "nvmf_tgt_br2" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:51.402 Cannot find device "nvmf_br" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:51.402 Cannot find device "nvmf_init_if" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:51.402 Cannot find device "nvmf_init_if2" 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:21:51.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:51.402 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:51.402 06:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:51.402 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:51.402 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
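The ip/iptables records above and below are nvmf_veth_init rebuilding the test network from scratch for this test: initiator interfaces nvmf_init_if/nvmf_init_if2 stay in the root namespace, target interfaces nvmf_tgt_if/nvmf_tgt_if2 are moved into the nvmf_tgt_ns_spdk namespace, every peer end is enslaved to the nvmf_br bridge, and 10.0.0.1 through 10.0.0.4/24 are assigned so the initiator reaches the target at 10.0.0.3 and 10.0.0.4. A trimmed-down sketch of the same topology with a single initiator and a single target interface (names, addresses and the firewall rule mirror the log; run as root):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                   # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_if up; ip link set nvmf_init_br up
ip link set nvmf_tgt_br up; ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br                          # both host-side peers...
ip link set nvmf_tgt_br master nvmf_br                           # ...hang off nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port as the harness does
ping -c 1 10.0.0.3                                               # initiator to target across the bridge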
00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:51.661 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:51.661 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:21:51.661 00:21:51.661 --- 10.0.0.3 ping statistics --- 00:21:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.661 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:51.661 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:51.661 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:21:51.661 00:21:51.661 --- 10.0.0.4 ping statistics --- 00:21:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.661 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:51.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:51.661 00:21:51.661 --- 10.0.0.1 ping statistics --- 00:21:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.661 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:51.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:51.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:21:51.661 00:21:51.661 --- 10.0.0.2 ping statistics --- 00:21:51.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.661 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.661 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@457 -- # return 0 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@505 -- # nvmfpid=94983 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@506 -- # waitforlisten 94983 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 94983 ']' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:51.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:51.662 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:51.921 [2024-10-01 06:14:17.283593] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:21:51.921 [2024-10-01 06:14:17.283697] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.921 [2024-10-01 06:14:17.416046] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:51.921 [2024-10-01 06:14:17.448642] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.921 [2024-10-01 06:14:17.448700] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.921 [2024-10-01 06:14:17.448708] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.921 [2024-10-01 06:14:17.448715] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.921 [2024-10-01 06:14:17.448721] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.921 [2024-10-01 06:14:17.448869] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.921 [2024-10-01 06:14:17.448878] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.921 [2024-10-01 06:14:17.475661] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.921 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:51.921 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:51.921 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:21:51.921 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:51.921 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:52.180 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.180 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=94983 00:21:52.180 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:52.439 [2024-10-01 06:14:17.846926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.439 06:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:52.699 Malloc0 00:21:52.699 06:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:52.958 06:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:53.217 06:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:53.476 [2024-10-01 06:14:18.837466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:53.477 06:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:21:53.477 [2024-10-01 06:14:19.061517] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=95025 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 95025 /var/tmp/bdevperf.sock 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@831 -- # '[' -z 95025 ']' 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:53.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:53.477 06:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:21:54.853 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:54.853 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # return 0 00:21:54.853 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:54.853 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:21:55.111 Nvme0n1 00:21:55.111 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:21:55.370 Nvme0n1 00:21:55.370 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:21:55.370 06:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:21:56.748 06:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:21:56.748 06:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:21:56.748 06:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:21:57.009 06:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:21:57.009 06:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:21:57.009 06:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95076 00:21:57.009 06:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:03.692 Attaching 4 probes... 00:22:03.692 @path[10.0.0.3, 4421]: 20000 00:22:03.692 @path[10.0.0.3, 4421]: 20439 00:22:03.692 @path[10.0.0.3, 4421]: 20476 00:22:03.692 @path[10.0.0.3, 4421]: 20741 00:22:03.692 @path[10.0.0.3, 4421]: 20631 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:03.692 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95076 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:22:03.693 06:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:03.693 06:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:03.951 06:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:22:03.951 06:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95187 00:22:03.951 06:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:03.951 06:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:10.519 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.520 Attaching 4 probes... 00:22:10.520 @path[10.0.0.3, 4420]: 20161 00:22:10.520 @path[10.0.0.3, 4420]: 20618 00:22:10.520 @path[10.0.0.3, 4420]: 20560 00:22:10.520 @path[10.0.0.3, 4420]: 20603 00:22:10.520 @path[10.0.0.3, 4420]: 20582 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95187 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:10.520 06:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:10.779 06:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:22:10.779 06:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95303 00:22:10.779 06:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:10.779 06:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:17.345 Attaching 4 probes... 00:22:17.345 @path[10.0.0.3, 4421]: 14853 00:22:17.345 @path[10.0.0.3, 4421]: 20153 00:22:17.345 @path[10.0.0.3, 4421]: 20174 00:22:17.345 @path[10.0.0.3, 4421]: 20184 00:22:17.345 @path[10.0.0.3, 4421]: 20584 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95303 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:22:17.345 06:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:22:17.604 06:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:22:17.604 06:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95421 00:22:17.604 06:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:17.604 06:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.174 Attaching 4 probes... 
00:22:24.174 00:22:24.174 00:22:24.174 00:22:24.174 00:22:24.174 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95421 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:22:24.174 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:24.434 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:22:24.434 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95534 00:22:24.434 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:24.434 06:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:31.003 06:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:31.003 06:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.003 Attaching 4 probes... 
00:22:31.003 @path[10.0.0.3, 4421]: 18954 00:22:31.003 @path[10.0.0.3, 4421]: 19265 00:22:31.003 @path[10.0.0.3, 4421]: 19006 00:22:31.003 @path[10.0.0.3, 4421]: 19560 00:22:31.003 @path[10.0.0.3, 4421]: 19665 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95534 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:31.003 [2024-10-01 06:14:56.445394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cc3300 is same with the state(6) to be set 00:22:31.003 06:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:22:31.942 06:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:22:31.942 06:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95658 00:22:31.942 06:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:31.942 06:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.541 Attaching 4 probes... 
00:22:38.541 @path[10.0.0.3, 4420]: 19359 00:22:38.541 @path[10.0.0.3, 4420]: 19696 00:22:38.541 @path[10.0.0.3, 4420]: 19136 00:22:38.541 @path[10.0.0.3, 4420]: 18768 00:22:38.541 @path[10.0.0.3, 4420]: 18071 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95658 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:38.541 06:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:38.541 [2024-10-01 06:15:04.030902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:38.541 06:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:22:38.801 06:15:04 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:22:45.369 06:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:22:45.369 06:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=95836 00:22:45.369 06:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 94983 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:22:45.369 06:15:10 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:51.949 Attaching 4 probes... 
00:22:51.949 @path[10.0.0.3, 4421]: 18575 00:22:51.949 @path[10.0.0.3, 4421]: 19214 00:22:51.949 @path[10.0.0.3, 4421]: 19004 00:22:51.949 @path[10.0.0.3, 4421]: 18956 00:22:51.949 @path[10.0.0.3, 4421]: 19135 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 95836 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 95025 ']' 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:51.949 killing process with pid 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 95025' 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 95025 00:22:51.949 { 00:22:51.949 "results": [ 00:22:51.949 { 00:22:51.949 "job": "Nvme0n1", 00:22:51.949 "core_mask": "0x4", 00:22:51.949 "workload": "verify", 00:22:51.949 "status": "terminated", 00:22:51.949 "verify_range": { 00:22:51.949 "start": 0, 00:22:51.949 "length": 16384 00:22:51.949 }, 00:22:51.949 "queue_depth": 128, 00:22:51.949 "io_size": 4096, 00:22:51.949 "runtime": 55.544942, 00:22:51.949 "iops": 8361.157348944571, 00:22:51.949 "mibps": 32.66077089431473, 00:22:51.949 "io_failed": 0, 00:22:51.949 "io_timeout": 0, 00:22:51.949 "avg_latency_us": 15283.821397152265, 00:22:51.949 "min_latency_us": 811.7527272727273, 00:22:51.949 "max_latency_us": 7046430.72 00:22:51.949 } 00:22:51.949 ], 00:22:51.949 "core_count": 1 00:22:51.949 } 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 95025 00:22:51.949 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:51.949 [2024-10-01 06:14:19.119441] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 
22.11.4 initialization... 00:22:51.949 [2024-10-01 06:14:19.119531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95025 ] 00:22:51.949 [2024-10-01 06:14:19.256068] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.949 [2024-10-01 06:14:19.298291] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.949 [2024-10-01 06:14:19.332317] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:51.949 [2024-10-01 06:14:20.923033] bdev_nvme.c:5605:nvme_bdev_ctrlr_create: *WARNING*: multipath_config: deprecated feature bdev_nvme_attach_controller.multipath configuration mismatch to be removed in v25.01 00:22:51.949 Running I/O for 90 seconds... 00:22:51.949 8092.00 IOPS, 31.61 MiB/s 8901.00 IOPS, 34.77 MiB/s 9340.67 IOPS, 36.49 MiB/s 9560.50 IOPS, 37.35 MiB/s 9696.40 IOPS, 37.88 MiB/s 9808.33 IOPS, 38.31 MiB/s 9883.43 IOPS, 38.61 MiB/s 9904.25 IOPS, 38.69 MiB/s [2024-10-01 06:14:29.323486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.323870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:130648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.324162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.324404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.324556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.324705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.324860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.324950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:130688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.325055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.325141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.949 [2024-10-01 06:14:29.325220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.325299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:130064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.949 [2024-10-01 06:14:29.325378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.949 [2024-10-01 06:14:29.325485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:130072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.949 [2024-10-01 06:14:29.325560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.325644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:130080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.325720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.325797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:130088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.325866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.326143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:130104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.326314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.326515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:130120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.326674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:22:51.950 [2024-10-01 06:14:29.326845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.326934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:130744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.327854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.327981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:130760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.950 [2024-10-01 06:14:29.328069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.328150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:130128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.328265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.328363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.328442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.328520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:13 nsid:1 lba:130144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.328593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.328676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.328748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.328829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.328923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:130168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:130176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:130184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:130192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:130200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:130208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329554] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:130232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:130248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:130256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:130264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:130272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:130280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:130288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:130296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 
cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:130304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:130312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.950 [2024-10-01 06:14:29.329938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:51.950 [2024-10-01 06:14:29.329960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:130768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.329974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.329993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:130792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:130800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:130808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:130816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:130320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:130336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:130344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:130360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:130376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 
[2024-10-01 06:14:29.330518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:130416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:130424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.951 [2024-10-01 06:14:29.330712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:130832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:130856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:130864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.330868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.330886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.334213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.334334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:130880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.334434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.334518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:130888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.334596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.334681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.334755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.334832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:130904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.334948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.335075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.335164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.335248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.335360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.335441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:130928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.335515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.335601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:130936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.335681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.951 [2024-10-01 06:14:29.335763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:130944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.951 [2024-10-01 06:14:29.335843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.335995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:130952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.336090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.336183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.336339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.336427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:130968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.336493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.336574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.336652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.336729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:130984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.336801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.336878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:130448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.336978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.337091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.337171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.337284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:130464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.337373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0066 p:0 m:0 
dnr:0 00:22:51.952 [2024-10-01 06:14:29.337449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.337522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.337603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.337676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.337752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.337824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.337919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:130496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:130512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.338774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.338788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:130552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.340496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.340557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.340596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:131000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:131008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:131016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:131024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:131032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.340834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.340855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:131040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 
06:14:29.340869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:131048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.341095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:131056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.341140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:131064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.341176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:0 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.341238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.952 [2024-10-01 06:14:29.341309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:130576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:130600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130608 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.952 [2024-10-01 06:14:29.341523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.952 [2024-10-01 06:14:29.341539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:29.341561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:130624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:29.341576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:29.341599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:29.341615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.953 9880.33 IOPS, 38.60 MiB/s 9915.50 IOPS, 38.73 MiB/s 9945.73 IOPS, 38.85 MiB/s 9983.25 IOPS, 39.00 MiB/s 10010.08 IOPS, 39.10 MiB/s 10027.64 IOPS, 39.17 MiB/s [2024-10-01 06:14:35.910256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:11312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:11320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:11336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:10920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:10960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.953 [2024-10-01 06:14:35.910870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.910977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.910996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:11384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:11400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:11416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:11424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:51.953 [2024-10-01 06:14:35.911231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:11432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:11440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.953 [2024-10-01 06:14:35.911367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:11456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.953 [2024-10-01 06:14:35.911381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:11472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:11480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:11496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:11504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:11528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.911706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:11000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.911895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.911971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:11024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:11552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:11560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:11568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:11576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:11592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:22:51.954 [2024-10-01 06:14:35.912362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:11640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:11648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:11656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.954 [2024-10-01 06:14:35.912634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:79 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:11056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.954 [2024-10-01 06:14:35.912781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:11064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.954 [2024-10-01 06:14:35.912794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:11080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:11104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.912969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.912999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:11144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:11184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.955 [2024-10-01 06:14:35.913349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:11208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:11672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 
lba:11720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.955 [2024-10-01 06:14:35.913946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:11224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.913978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.913998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:11232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.914012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.914031] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.914045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.914063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:11248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.914077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.914096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.914109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.914128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.955 [2024-10-01 06:14:35.914142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:51.955 [2024-10-01 06:14:35.914161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:35.914176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.914855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:11280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:35.914882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.914927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:11784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.914946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.914971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:11792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.914986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:11808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:22:51.956 [2024-10-01 06:14:35.915101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:11824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:11832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:11856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:11864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:11872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:11888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:11896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:11912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:35.915651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:35.915665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.956 9921.27 IOPS, 38.75 MiB/s 9399.75 IOPS, 36.72 MiB/s 9446.35 IOPS, 36.90 MiB/s 9486.00 IOPS, 37.05 MiB/s 9515.16 IOPS, 37.17 MiB/s 9547.40 IOPS, 37.29 MiB/s 9576.19 IOPS, 37.41 MiB/s [2024-10-01 06:14:43.058039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.058561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.058697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.058791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.058876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.059093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.059262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.059454] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.059618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.059774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.956 [2024-10-01 06:14:43.059849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:113256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:113280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.060887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:113288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.060979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.061065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.061153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0018 p:0 m:0 
dnr:0 00:22:51.956 [2024-10-01 06:14:43.061236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:113304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.061310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.061388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.061466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.061552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.956 [2024-10-01 06:14:43.061626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:51.956 [2024-10-01 06:14:43.061702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.061777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.061863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.061984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.062167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.062343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.062506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.062665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.062831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.062931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.063110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.063269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.063463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.063636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.063831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.063948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.064133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.064352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.064511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.064661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.064824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.064905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.065001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.065166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.065326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.065474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.065652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.065824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.065936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113920 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:66 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.066971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.066992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.957 [2024-10-01 06:14:43.067006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.067025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.957 [2024-10-01 06:14:43.067038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:51.957 [2024-10-01 06:14:43.067068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 
06:14:43.067310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:113616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:113624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.067797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.067828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.067865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.067898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.067975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.067992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.068026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.068066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.068100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.958 [2024-10-01 06:14:43.068134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.068167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:113640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.068201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.958 [2024-10-01 06:14:43.068265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:51.958 [2024-10-01 06:14:43.068298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:113656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:113672 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:113712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:113736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.068650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.068663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.073170] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.959 [2024-10-01 06:14:43.073275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.073384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.073465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.073555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.073667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.073753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.073829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.073931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.074041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.074122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.074213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.074333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.074410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.074571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.074664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.074783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.074904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.075062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.075196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 
06:14:43.075308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.075402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.075506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.075604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.075696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.075794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.075958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.076062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.076165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.076266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.076412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.076501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.076611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.076707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.076796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.076879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.077103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.077328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:35 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.077494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.077663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.077824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.077933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.078039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:51.959 [2024-10-01 06:14:43.078138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:43.078243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:51.959 9580.18 IOPS, 37.42 MiB/s 9163.65 IOPS, 35.80 MiB/s 8781.83 IOPS, 34.30 MiB/s 8430.56 IOPS, 32.93 MiB/s 8106.31 IOPS, 31.67 MiB/s 7806.07 IOPS, 30.49 MiB/s 7527.29 IOPS, 29.40 MiB/s 7269.31 IOPS, 28.40 MiB/s 7340.37 IOPS, 28.67 MiB/s 7411.71 IOPS, 28.95 MiB/s 7481.59 IOPS, 29.22 MiB/s 7545.30 IOPS, 29.47 MiB/s 7614.91 IOPS, 29.75 MiB/s 7676.09 IOPS, 29.98 MiB/s [2024-10-01 06:14:56.446569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.959 [2024-10-01 06:14:56.446632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.446940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.446975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.447033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:51.960 [2024-10-01 06:14:56.447350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:79504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.447890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.447965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.447982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.447996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:79952 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:51.960 [2024-10-01 06:14:56.448176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:79512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:79528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:79552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.960 [2024-10-01 06:14:56.448438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.960 [2024-10-01 06:14:56.448452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.448465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:79568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.448492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 
06:14:56.448561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448926] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.448941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.448956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:79584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.961 [2024-10-01 06:14:56.449546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.961 [2024-10-01 06:14:56.449719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.961 [2024-10-01 06:14:56.449733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 
06:14:56.449838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.449966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.449981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.450001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:51.962 [2024-10-01 06:14:56.450029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:41 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.962 [2024-10-01 06:14:56.450568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450581] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc860 is same with the state(6) to be set 00:22:51.962 [2024-10-01 06:14:56.450597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.962 [2024-10-01 06:14:56.450607] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.962 [2024-10-01 06:14:56.450617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79792 len:8 PRP1 0x0 PRP2 0x0 00:22:51.962 [2024-10-01 06:14:56.450629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450643] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.962 [2024-10-01 06:14:56.450653] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.962 [2024-10-01 06:14:56.450662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79800 len:8 PRP1 0x0 PRP2 0x0 00:22:51.962 [2024-10-01 06:14:56.450674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.962 [2024-10-01 06:14:56.450695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.962 [2024-10-01 06:14:56.450710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79808 len:8 PRP1 0x0 PRP2 0x0 00:22:51.962 [2024-10-01 06:14:56.450722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:51.962 [2024-10-01 06:14:56.450735] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.962 [2024-10-01 06:14:56.450744] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.962 [2024-10-01 06:14:56.450754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79816 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.450765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.450777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.450786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.450795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79824 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.450807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.450821] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.450831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.450841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80280 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.450852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.450865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.450874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.450883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80288 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.450895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.450907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.450933] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.450945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80296 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.450957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.450969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.450979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.450988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80304 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 
06:14:56.451012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451021] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80312 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451069] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80320 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80328 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451146] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451155] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80336 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451200] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80344 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451233] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80352 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451275] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451286] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80360 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451329] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80368 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451362] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451372] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80376 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:51.963 [2024-10-01 06:14:56.451420] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:51.963 [2024-10-01 06:14:56.451429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80384 len:8 PRP1 0x0 PRP2 0x0 00:22:51.963 [2024-10-01 06:14:56.451441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451482] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x7fc860 was disconnected and freed. reset controller. 
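The block above is the raw per-command dump emitted while the active path is torn down: every outstanding READ/WRITE on qid:1 is completed either as ASYMMETRIC ACCESS INACCESSIBLE (03/02) or ABORTED - SQ DELETION (00/08) before qpair 0x7fc860 is freed and the controller reset begins. When a run like this needs to be summarized offline, a minimal sketch along the following lines works on a saved copy of the console output (multipath.log is a hypothetical file name, not something the test writes):

# sketch: tally the aborted I/O shown above; multipath.log is a hypothetical capture of this console output
grep -c 'ABORTED - SQ DELETION' multipath.log                                              # completions aborted by queue deletion
grep -o 'NOTICE\*: READ sqid:1\|NOTICE\*: WRITE sqid:1' multipath.log | sort | uniq -c     # READ/WRITE mix on the I/O queue
grep -o '([0-9a-f][0-9a-f]/[0-9a-f][0-9a-f])' multipath.log | sort | uniq -c | sort -rn    # status-code histogram, e.g. (00/08), (03/02)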
00:22:51.963 [2024-10-01 06:14:56.451590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.963 [2024-10-01 06:14:56.451615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.451630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.963 [2024-10-01 06:14:56.451642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.463653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.963 [2024-10-01 06:14:56.463698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.463719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.963 [2024-10-01 06:14:56.463737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.463756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:51.963 [2024-10-01 06:14:56.463773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.963 [2024-10-01 06:14:56.463798] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b94a0 is same with the state(6) to be set 00:22:51.963 [2024-10-01 06:14:56.465322] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.963 [2024-10-01 06:14:56.465373] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b94a0 (9): Bad file descriptor 00:22:51.963 [2024-10-01 06:14:56.465874] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:51.963 [2024-10-01 06:14:56.465938] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b94a0 with addr=10.0.0.3, port=4421 00:22:51.963 [2024-10-01 06:14:56.465962] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b94a0 is same with the state(6) to be set 00:22:51.963 [2024-10-01 06:14:56.466038] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b94a0 (9): Bad file descriptor 00:22:51.963 [2024-10-01 06:14:56.466085] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:51.963 [2024-10-01 06:14:56.466106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:51.963 [2024-10-01 06:14:56.466125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:51.963 [2024-10-01 06:14:56.466164] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
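Between the abort dump and the I/O statistics, the trace records the failover itself: nvme_ctrlr_disconnect resets nqn.2016-06.io.spdk:cnode1, the uring connect() to 10.0.0.3 port 4421 is refused (errno 111 is ECONNREFUSED on Linux), the controller is marked failed and the reset is retried, and about ten seconds later (06:15:06) the retry succeeds while the verify workload keeps running. A small sketch for pulling just that timeline out of a saved log (again using the hypothetical multipath.log):

# sketch: extract the reset/reconnect timeline from a saved copy of this console output (multipath.log is hypothetical)
grep -E 'resetting controller|connect\(\) failed|controller reinitialization failed|Resetting controller (failed|successful)' multipath.log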
00:22:51.963 [2024-10-01 06:14:56.466184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:51.963 7727.22 IOPS, 30.18 MiB/s 7774.81 IOPS, 30.37 MiB/s 7829.16 IOPS, 30.58 MiB/s 7879.28 IOPS, 30.78 MiB/s 7923.00 IOPS, 30.95 MiB/s 7958.15 IOPS, 31.09 MiB/s 7983.90 IOPS, 31.19 MiB/s 8010.51 IOPS, 31.29 MiB/s 8045.55 IOPS, 31.43 MiB/s 8080.44 IOPS, 31.56 MiB/s [2024-10-01 06:15:06.529030] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:51.963 8116.96 IOPS, 31.71 MiB/s 8150.72 IOPS, 31.84 MiB/s 8183.92 IOPS, 31.97 MiB/s 8214.78 IOPS, 32.09 MiB/s 8236.24 IOPS, 32.17 MiB/s 8261.25 IOPS, 32.27 MiB/s 8288.08 IOPS, 32.38 MiB/s 8311.17 IOPS, 32.47 MiB/s 8332.67 IOPS, 32.55 MiB/s 8354.55 IOPS, 32.63 MiB/s Received shutdown signal, test time was about 55.545676 seconds 00:22:51.963 00:22:51.963 Latency(us) 00:22:51.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:51.963 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:51.963 Verification LBA range: start 0x0 length 0x4000 00:22:51.964 Nvme0n1 : 55.54 8361.16 32.66 0.00 0.00 15283.82 811.75 7046430.72 00:22:51.964 =================================================================================================================== 00:22:51.964 Total : 8361.16 32.66 0.00 0.00 15283.82 811.75 7046430.72 00:22:51.964 [2024-10-01 06:15:16.616052] app.c:1032:log_deprecation_hits: *WARNING*: multipath_config: deprecation 'bdev_nvme_attach_controller.multipath configuration mismatch' scheduled for removal in v25.01 hit 1 times 00:22:51.964 06:15:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # nvmfcleanup 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # set +e 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:51.964 rmmod nvme_tcp 00:22:51.964 rmmod nvme_fabrics 00:22:51.964 rmmod nvme_keyring 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@513 -- # '[' -n 94983 ']' 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@514 -- # killprocess 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@950 -- # '[' -z 94983 ']' 
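The Latency(us) table above and the rolling IOPS samples before it are internally consistent with the 4096-byte I/O size in the job line: MiB/s = IOPS * 4096 / 2^20, so the 8361.16 IOPS total corresponds to the reported 32.66 MiB/s (and, for example, 7727.22 IOPS to 30.18 MiB/s). A one-line check with the figures taken from the table:

# MiB/s = IOPS * io_size / 2^20; reproduces the 32.66 MiB/s reported alongside 8361.16 IOPS at 4 KiB I/O
awk 'BEGIN { iops = 8361.16; io_size = 4096; printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024) }'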
00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # kill -0 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # uname 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:51.964 killing process with pid 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@968 -- # echo 'killing process with pid 94983' 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@969 -- # kill 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@974 -- # wait 94983 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-save 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # iptables-restore 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
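nvmftestfini in the trace above (its last ip netns deletion continues just below) unloads nvme-tcp, nvme-fabrics and nvme-keyring, kills the target process (pid 94983), scrubs firewall rules through iptr, and deletes the test network. The iptr trace (iptables-save, grep -v SPDK_NVMF, iptables-restore) is consistent with the usual save-filter-restore pipeline, and the ip link / ip netns commands appear verbatim; a condensed sketch of that cleanup, with error suppression added here only for the sketch (the traced script runs the commands bare):

# sketch of the cleanup traced above; 2>/dev/null is an addition for the sketch, not in the script
iptables-save | grep -v SPDK_NVMF | iptables-restore        # drop only the SPDK_NVMF-tagged rules
for ifc in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$ifc" nomaster 2>/dev/null                  # detach from the bridge
    ip link set "$ifc" down 2>/dev/null
done
ip link delete nvmf_br type bridge 2>/dev/null
ip link delete nvmf_init_if 2>/dev/null
ip link delete nvmf_init_if2 2>/dev/null
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 2>/dev/null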
00:22:51.964 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:22:52.223 00:22:52.223 real 1m0.980s 00:22:52.223 user 2m49.395s 00:22:52.223 sys 0m17.727s 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:22:52.223 ************************************ 00:22:52.223 END TEST nvmf_host_multipath 00:22:52.223 ************************************ 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:52.223 06:15:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:52.224 ************************************ 00:22:52.224 START TEST nvmf_timeout 00:22:52.224 ************************************ 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:22:52.224 * Looking for test storage... 
00:22:52.224 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lcov --version 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:52.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.224 --rc genhtml_branch_coverage=1 00:22:52.224 --rc genhtml_function_coverage=1 00:22:52.224 --rc genhtml_legend=1 00:22:52.224 --rc geninfo_all_blocks=1 00:22:52.224 --rc geninfo_unexecuted_blocks=1 00:22:52.224 00:22:52.224 ' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:52.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.224 --rc genhtml_branch_coverage=1 00:22:52.224 --rc genhtml_function_coverage=1 00:22:52.224 --rc genhtml_legend=1 00:22:52.224 --rc geninfo_all_blocks=1 00:22:52.224 --rc geninfo_unexecuted_blocks=1 00:22:52.224 00:22:52.224 ' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:52.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.224 --rc genhtml_branch_coverage=1 00:22:52.224 --rc genhtml_function_coverage=1 00:22:52.224 --rc genhtml_legend=1 00:22:52.224 --rc geninfo_all_blocks=1 00:22:52.224 --rc geninfo_unexecuted_blocks=1 00:22:52.224 00:22:52.224 ' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:52.224 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:52.224 --rc genhtml_branch_coverage=1 00:22:52.224 --rc genhtml_function_coverage=1 00:22:52.224 --rc genhtml_legend=1 00:22:52.224 --rc geninfo_all_blocks=1 00:22:52.224 --rc geninfo_unexecuted_blocks=1 00:22:52.224 00:22:52.224 ' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.224 
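The prologue above probes the installed lcov and runs scripts/common.sh's field-wise version comparison: lt 1.15 2 splits both versions on '.' and '-', compares the fields numerically, and returns 0 because 1 < 2 already in the first field. A stripped-down sketch of that comparison (version_lt is a hypothetical name, not the script's own helper):

# sketch of the numeric, field-wise version compare traced above (version_lt is a hypothetical name)
version_lt() {
    local IFS=.-
    local -a a=($1) b=($2)                                   # split both versions on '.' and '-'
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}                      # missing fields count as 0
        (( 10#$x > 10#$y )) && return 1                      # force base 10 so fields like 08 are not octal
        (( 10#$x < 10#$y )) && return 0
    done
    return 1                                                 # equal is not strictly less-than
}
version_lt 1.15 2 && echo '1.15 < 2'                         # matches the trace: lt 1.15 2 returned 0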
06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.224 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:52.225 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:22:52.225 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@472 -- # prepare_net_devs 00:22:52.484 06:15:17 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@434 -- # local -g is_hw=no 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@436 -- # remove_spdk_ns 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@456 -- # nvmf_veth_init 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.484 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:52.485 Cannot find device "nvmf_init_br" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:52.485 Cannot find device "nvmf_init_br2" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:22:52.485 Cannot find device "nvmf_tgt_br" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.485 Cannot find device "nvmf_tgt_br2" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:52.485 Cannot find device "nvmf_init_br" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:52.485 Cannot find device "nvmf_init_br2" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:52.485 Cannot find device "nvmf_tgt_br" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:52.485 Cannot find device "nvmf_tgt_br2" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:52.485 Cannot find device "nvmf_br" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:52.485 Cannot find device "nvmf_init_if" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:52.485 Cannot find device "nvmf_init_if2" 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.485 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.485 06:15:17 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.485 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
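The records above are nvmf_veth_init building the test topology: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side ends enslaved to the nvmf_br bridge, and iptables rules opening TCP port 4420. A minimal standalone sketch of that topology follows (one initiator/target pair only; the interface names and 10.0.0.x addresses are taken from the log, but the script itself is an illustrative reconstruction, not the harness code):

#!/usr/bin/env bash
# Illustrative sketch of the veth/netns topology set up above (single pair only).
set -euo pipefail

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair for the initiator, one for the target; the *_br ends stay on the host.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns "$NS"

# Addresses as seen in the log: initiator 10.0.0.1, target 10.0.0.3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set lo up

# Bridge the host-side ends together so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Allow NVMe/TCP traffic in and let the bridge forward it.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity check, mirroring the pings in the log.
ping -c 1 10.0.0.3

Run as root; teardown is essentially the reverse (delete the namespace, the bridge, and the host-side links), which is what the earlier "Cannot find device" cleanup commands were attempting before anything had been created.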
00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:52.747 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.747 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:22:52.747 00:22:52.747 --- 10.0.0.3 ping statistics --- 00:22:52.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.747 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:52.747 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:52.747 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.073 ms 00:22:52.747 00:22:52.747 --- 10.0.0.4 ping statistics --- 00:22:52.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.747 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:22:52.747 00:22:52.747 --- 10.0.0.1 ping statistics --- 00:22:52.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.747 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:52.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:22:52.747 00:22:52.747 --- 10.0.0.2 ping statistics --- 00:22:52.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.747 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@457 -- # return 0 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # '[' '' == iso ']' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@505 -- # nvmfpid=96196 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@506 -- # waitforlisten 96196 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:52.747 06:15:18 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96196 ']' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:52.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:52.747 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:52.747 [2024-10-01 06:15:18.284003] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:22:52.748 [2024-10-01 06:15:18.284107] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:53.008 [2024-10-01 06:15:18.418839] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:53.008 [2024-10-01 06:15:18.451584] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:53.008 [2024-10-01 06:15:18.451653] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:53.008 [2024-10-01 06:15:18.451682] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:53.008 [2024-10-01 06:15:18.451695] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:53.008 [2024-10-01 06:15:18.451703] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
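nvmfappstart then launches the target inside that namespace with core mask 0x3 and waits for its RPC socket; the EAL and reactor notices above are its startup output. A hedged sketch of the equivalent manual steps follows (the polling loop is my stand-in for the harness's waitforlisten helper, not a copy of it):

#!/usr/bin/env bash
# Sketch: start nvmf_tgt inside the test namespace and wait for its RPC socket.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
RPC_SOCK=/var/tmp/spdk.sock

ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
NVMF_PID=$!

# Stand-in for waitforlisten: poll the RPC socket until the app answers.
until "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$NVMF_PID"   # abort if the target died during startup
    sleep 0.5
done
echo "nvmf_tgt (pid $NVMF_PID) is up"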
00:22:53.008 [2024-10-01 06:15:18.452063] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:53.008 [2024-10-01 06:15:18.452087] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.008 [2024-10-01 06:15:18.480920] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:53.008 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:53.267 [2024-10-01 06:15:18.866720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.526 06:15:18 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:53.784 Malloc0 00:22:53.784 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:54.043 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:54.300 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:54.558 [2024-10-01 06:15:19.949577] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=96239 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 96239 /var/tmp/bdevperf.sock 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96239 ']' 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:54.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
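The RPC sequence above provisions the target end to end: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.3:4420; bdevperf is then started in -z (wait for configuration) mode on its own RPC socket. A condensed sketch of the same sequence, reusing the commands and arguments visible in the log:

#!/usr/bin/env bash
# Sketch: provision the NVMe-oF/TCP target and start bdevperf, as in the log above.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" "$@"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192           # TCP transport, flags as used by the harness
rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB backing bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420

# bdevperf in -z mode waits for bdev configuration over its own RPC socket.
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -f &
echo "bdevperf pid: $!"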
00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:54.558 06:15:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:22:54.558 [2024-10-01 06:15:20.006022] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:22:54.558 [2024-10-01 06:15:20.006107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96239 ] 00:22:54.558 [2024-10-01 06:15:20.143211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.816 [2024-10-01 06:15:20.186358] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.816 [2024-10-01 06:15:20.220809] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:54.816 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:54.816 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:22:54.816 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:22:55.074 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:22:55.333 NVMe0n1 00:22:55.333 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:55.333 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=96255 00:22:55.333 06:15:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:22:55.592 Running I/O for 10 seconds... 
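This is the setup the timeout test actually exercises: over the bdevperf RPC socket, the NVMe host side is told to use a retry count of -1 and to attach the target subsystem with a 5-second controller-loss timeout and a 2-second reconnect delay, after which perform_tests starts the 10-second verify workload; the IOPS figure reported just below comes from that run. A sketch of those calls with the values from the log (the comments reflect my reading of the flags, not the harness's own documentation):

#!/usr/bin/env bash
# Sketch: configure bdevperf's NVMe host side and start the 10 s verify run.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk
BDEVPERF_SOCK=/var/tmp/bdevperf.sock
brpc() { "$SPDK/scripts/rpc.py" -s "$BDEVPERF_SOCK" "$@"; }

# -r -1: retry count of -1, i.e. keep retrying failed I/O (flag taken from the log).
brpc bdev_nvme_set_options -r -1

# Attach the target subsystem; the controller is declared lost only after 5 s,
# with a reconnect attempt every 2 s.
brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

# Kick off the workload defined on the bdevperf command line (-w verify -t 10).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BDEVPERF_SOCK" perform_tests &
RPC_PID=$!
sleep 1
echo "perform_tests running as pid $RPC_PID"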
00:22:56.527 06:15:21 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:56.786 7445.00 IOPS, 29.08 MiB/s [2024-10-01 06:15:22.147461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:65248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:56.786 [2024-10-01 06:15:22.147526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.786 [... repeated nvme_qpair notices elided: each remaining outstanding WRITE and READ command on sqid:1 (lba 64688-65704) is printed by nvme_io_qpair_print_command and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, as the submission queue is torn down after the listener is removed ...] 00:22:56.789 [2024-10-01 06:15:22.149836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:65104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:65112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:65120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:65128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:65136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:65144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:65152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.149982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.149993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:65160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:65168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:65176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:65184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:65192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:65208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:65216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:65224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:65232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:56.789 [2024-10-01 06:15:22.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150195] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca3670 is same with the state(6) to be set 00:22:56.789 [2024-10-01 06:15:22.150207] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:56.789 [2024-10-01 06:15:22.150215] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:56.789 [2024-10-01 06:15:22.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:65240 len:8 PRP1 0x0 PRP2 0x0 00:22:56.789 [2024-10-01 06:15:22.150235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:56.789 [2024-10-01 06:15:22.150278] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ca3670 was disconnected and freed. reset controller. 
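The block above is the qpair teardown path for qid:1: every READ/WRITE still outstanding on the deleted submission queue is completed with ABORTED - SQ DELETION (00/08), the remaining queued requests are completed manually, and qpair 0x1ca3670 is disconnected and freed before bdev_nvme begins the controller reset. When skimming a dump of this size it is usually enough to count the aborted completions rather than read them; a minimal sketch, assuming the run has been captured to a file named timeout.log (the file name is an assumption, not part of the test):

  # count the I/O completions aborted by the submission-queue deletion
  grep -c 'ABORTED - SQ DELETION' timeout.log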
00:22:56.789 [2024-10-01 06:15:22.150534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.789 [2024-10-01 06:15:22.150621] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82630 (9): Bad file descriptor 00:22:56.789 [2024-10-01 06:15:22.150719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:56.790 [2024-10-01 06:15:22.150740] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c82630 with addr=10.0.0.3, port=4420 00:22:56.790 [2024-10-01 06:15:22.150751] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82630 is same with the state(6) to be set 00:22:56.790 [2024-10-01 06:15:22.150768] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82630 (9): Bad file descriptor 00:22:56.790 [2024-10-01 06:15:22.150783] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:56.790 [2024-10-01 06:15:22.150792] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:56.790 [2024-10-01 06:15:22.150802] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:56.790 [2024-10-01 06:15:22.150823] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:56.790 [2024-10-01 06:15:22.150834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:56.790 06:15:22 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:22:58.667 4043.00 IOPS, 15.79 MiB/s 2695.33 IOPS, 10.53 MiB/s [2024-10-01 06:15:24.151011] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:58.667 [2024-10-01 06:15:24.151095] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c82630 with addr=10.0.0.3, port=4420 00:22:58.667 [2024-10-01 06:15:24.151111] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82630 is same with the state(6) to be set 00:22:58.667 [2024-10-01 06:15:24.151150] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82630 (9): Bad file descriptor 00:22:58.667 [2024-10-01 06:15:24.151168] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:58.667 [2024-10-01 06:15:24.151178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:58.667 [2024-10-01 06:15:24.151188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:58.667 [2024-10-01 06:15:24.151212] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:58.667 [2024-10-01 06:15:24.151224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:58.667 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:22:58.667 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:58.667 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:22:58.947 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:22:58.947 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:22:58.947 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:22:58.947 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:22:59.210 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:22:59.210 06:15:24 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:23:00.842 2021.50 IOPS, 7.90 MiB/s 1617.20 IOPS, 6.32 MiB/s [2024-10-01 06:15:26.151415] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:00.842 [2024-10-01 06:15:26.151496] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c82630 with addr=10.0.0.3, port=4420 00:23:00.842 [2024-10-01 06:15:26.151511] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c82630 is same with the state(6) to be set 00:23:00.842 [2024-10-01 06:15:26.151549] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c82630 (9): Bad file descriptor 00:23:00.842 [2024-10-01 06:15:26.151567] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:00.842 [2024-10-01 06:15:26.151577] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:00.842 [2024-10-01 06:15:26.151587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:00.842 [2024-10-01 06:15:26.151612] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:00.842 [2024-10-01 06:15:26.151624] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:02.709 1347.67 IOPS, 5.26 MiB/s 1155.14 IOPS, 4.51 MiB/s [2024-10-01 06:15:28.151752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:02.709 [2024-10-01 06:15:28.151827] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.709 [2024-10-01 06:15:28.151854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:02.709 [2024-10-01 06:15:28.151864] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:02.709 [2024-10-01 06:15:28.151902] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
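At this stage the target listener on 10.0.0.3:4420 has been removed, so each reconnect attempt fails with errno 111, but the configured controller-loss timeout has not yet expired; the timeout.sh@57/@58 checks above therefore still see controller NVMe0 and bdev NVMe0n1 through the bdevperf RPC socket. A minimal sketch of that check, reusing the rpc.py and jq invocations from the trace (the shell function name and variables are illustrative, not part of host/timeout.sh):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/bdevperf.sock
  still_attached() {
      local ctrlr bdev
      ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')
      bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')
      # Before the controller-loss timeout fires, both names should still be reported.
      [[ "$ctrlr" == NVMe0 && "$bdev" == NVMe0n1 ]]
  }

Once the loss timeout does fire, the same two queries return empty strings, which is what the @62/@63 checks further down verify.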
00:23:03.644 1010.75 IOPS, 3.95 MiB/s 00:23:03.644 Latency(us) 00:23:03.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.644 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:03.644 Verification LBA range: start 0x0 length 0x4000 00:23:03.644 NVMe0n1 : 8.10 998.52 3.90 15.81 0.00 126011.19 3961.95 7015926.69 00:23:03.644 =================================================================================================================== 00:23:03.644 Total : 998.52 3.90 15.81 0.00 126011.19 3961.95 7015926.69 00:23:03.644 { 00:23:03.644 "results": [ 00:23:03.644 { 00:23:03.644 "job": "NVMe0n1", 00:23:03.644 "core_mask": "0x4", 00:23:03.644 "workload": "verify", 00:23:03.644 "status": "finished", 00:23:03.644 "verify_range": { 00:23:03.644 "start": 0, 00:23:03.644 "length": 16384 00:23:03.644 }, 00:23:03.644 "queue_depth": 128, 00:23:03.644 "io_size": 4096, 00:23:03.644 "runtime": 8.097967, 00:23:03.644 "iops": 998.5222216884806, 00:23:03.644 "mibps": 3.9004774284706274, 00:23:03.644 "io_failed": 128, 00:23:03.644 "io_timeout": 0, 00:23:03.644 "avg_latency_us": 126011.19285300041, 00:23:03.644 "min_latency_us": 3961.949090909091, 00:23:03.644 "max_latency_us": 7015926.69090909 00:23:03.644 } 00:23:03.644 ], 00:23:03.644 "core_count": 1 00:23:03.644 } 00:23:04.209 06:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:23:04.209 06:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.209 06:15:29 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:23:04.468 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:23:04.468 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:23:04.468 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:23:04.468 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 96255 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 96239 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96239 ']' 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96239 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96239 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:04.727 killing process with pid 96239 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96239' 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96239 00:23:04.727 Received shutdown signal, test time was 
about 9.245701 seconds 00:23:04.727 00:23:04.727 Latency(us) 00:23:04.727 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.727 =================================================================================================================== 00:23:04.727 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.727 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96239 00:23:04.987 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:05.245 [2024-10-01 06:15:30.670012] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=96376 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 96376 /var/tmp/bdevperf.sock 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96376 ']' 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.245 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:05.245 [2024-10-01 06:15:30.733042] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:23:05.245 [2024-10-01 06:15:30.733144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96376 ] 00:23:05.503 [2024-10-01 06:15:30.868215] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.503 [2024-10-01 06:15:30.904535] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.503 [2024-10-01 06:15:30.934533] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:05.503 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:05.503 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:05.503 06:15:30 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:05.762 06:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:23:06.021 NVMe0n1 00:23:06.021 06:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=96387 00:23:06.021 06:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:23:06.021 06:15:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:06.281 Running I/O for 10 seconds... 
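The trace above re-creates the bdevperf setup for the next phase of the test: the TCP listener on 10.0.0.3:4420 is added back to nqn.2016-06.io.spdk:cnode1, bdevperf is started with core mask 0x4 against /var/tmp/bdevperf.sock, bdev_nvme_set_options -r -1 is applied (presumably the bdev-level retry count), and NVMe0 is attached with a 5-second controller-loss timeout, a 2-second fast-I/O-fail timeout and a 1-second reconnect delay before perform_tests kicks off the verify workload. Condensed into plain commands, this is a sketch assembled from the invocations visible in this log, with the long option lists wrapped for readability:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener \
      nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests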
00:23:07.218 06:15:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:07.218 9124.00 IOPS, 35.64 MiB/s [2024-10-01 06:15:32.829119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3d50 is same with the state(6) to be set 00:23:07.218 [2024-10-01 06:15:32.829165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3d50 is same with the state(6) to be set 00:23:07.218 [2024-10-01 06:15:32.829177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3d50 is same with the state(6) to be set 00:23:07.218 [2024-10-01 06:15:32.829186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3d50 is same with the state(6) to be set 00:23:07.218 [2024-10-01 06:15:32.829195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a3d50 is same with the state(6) to be set 00:23:07.218 [2024-10-01 06:15:32.829821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:83344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.829879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.829903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:83352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.829931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.829945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:83360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.829956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.829967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:83368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.829977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.829989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:83376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.829999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.830020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:83712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:83720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:83728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:83736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:83752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.218 [2024-10-01 06:15:32.830199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:83392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.830220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.830241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.218 [2024-10-01 06:15:32.830264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.218 [2024-10-01 06:15:32.830276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:99 nsid:1 lba:83416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:83424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:83432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:83440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:83784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83816 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:83848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:83864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:83880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:83896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:07.219 [2024-10-01 06:15:32.830715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:83952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:83960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.830879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:83456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:83464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:83472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.830980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:83480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.830990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:83496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:83504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:83968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:83992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831158] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:84000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:84016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:84048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:84056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:84064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:84080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:83520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:83528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:83536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:83544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:83552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:83560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:83576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.219 [2024-10-01 06:15:32.831580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 
[2024-10-01 06:15:32.831591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:84096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:84104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:84112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:84120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:84136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:84144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831821] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:84176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:84184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:84192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:84200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:84208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.219 [2024-10-01 06:15:32.831947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.219 [2024-10-01 06:15:32.831956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.831968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:84224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.220 [2024-10-01 06:15:32.831977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.831989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:84232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.220 [2024-10-01 06:15:32.831999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.220 [2024-10-01 06:15:32.832022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:07.220 [2024-10-01 06:15:32.832045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:112 nsid:1 lba:83584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:83592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:83600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:83608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:83616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:83624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:83632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:83640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:83648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:83656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:83664 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:83672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:83680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:83688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:83696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:07.220 [2024-10-01 06:15:32.832382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832393] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243b9c0 is same with the state(6) to be set 00:23:07.220 [2024-10-01 06:15:32.832405] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832413] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:83704 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832448] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84256 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832491] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832499] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84264 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 
06:15:32.832526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832533] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84272 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832567] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84280 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84288 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832630] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84296 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84304 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84312 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832735] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84320 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832769] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832777] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84328 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84336 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832838] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832846] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84344 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832891] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84352 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832918] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:07.220 [2024-10-01 06:15:32.832935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:07.220 [2024-10-01 06:15:32.832945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:84360 len:8 PRP1 0x0 PRP2 0x0 00:23:07.220 [2024-10-01 06:15:32.832954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:07.220 [2024-10-01 06:15:32.832996] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 
0x243b9c0 was disconnected and freed. reset controller. 00:23:07.220 [2024-10-01 06:15:32.833257] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.220 [2024-10-01 06:15:32.833349] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:07.478 [2024-10-01 06:15:32.833466] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:07.478 [2024-10-01 06:15:32.833489] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:07.478 [2024-10-01 06:15:32.833501] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:07.478 [2024-10-01 06:15:32.833519] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:07.478 [2024-10-01 06:15:32.833536] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:07.478 [2024-10-01 06:15:32.833545] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:07.478 [2024-10-01 06:15:32.833556] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:07.478 [2024-10-01 06:15:32.833577] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:07.478 [2024-10-01 06:15:32.833588] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:07.478 06:15:32 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1 00:23:08.410 5209.00 IOPS, 20.35 MiB/s [2024-10-01 06:15:33.833719] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:08.410 [2024-10-01 06:15:33.833784] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:08.410 [2024-10-01 06:15:33.833800] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:08.410 [2024-10-01 06:15:33.833822] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:08.410 [2024-10-01 06:15:33.833840] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:08.410 [2024-10-01 06:15:33.833860] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:08.410 [2024-10-01 06:15:33.833871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:08.410 [2024-10-01 06:15:33.833895] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:08.410 [2024-10-01 06:15:33.833939] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:08.410 06:15:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:23:08.668 [2024-10-01 06:15:34.085936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:23:08.668 06:15:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 96387
00:23:09.233 3472.67 IOPS, 13.57 MiB/s [2024-10-01 06:15:34.849747] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:23:16.412 2604.50 IOPS, 10.17 MiB/s 3770.60 IOPS, 14.73 MiB/s 4792.17 IOPS, 18.72 MiB/s 5515.57 IOPS, 21.55 MiB/s 6050.12 IOPS, 23.63 MiB/s 6465.89 IOPS, 25.26 MiB/s 6794.50 IOPS, 26.54 MiB/s
00:23:16.412 Latency(us)
00:23:16.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.412 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:16.412 Verification LBA range: start 0x0 length 0x4000
00:23:16.412 NVMe0n1 : 10.01 6799.69 26.56 0.00 0.00 18789.10 1303.27 3019898.88
00:23:16.412 ===================================================================================================================
00:23:16.412 Total : 6799.69 26.56 0.00 0.00 18789.10 1303.27 3019898.88
00:23:16.412 {
00:23:16.412   "results": [
00:23:16.412     {
00:23:16.412       "job": "NVMe0n1",
00:23:16.412       "core_mask": "0x4",
00:23:16.412       "workload": "verify",
00:23:16.412       "status": "finished",
00:23:16.412       "verify_range": {
00:23:16.412         "start": 0,
00:23:16.412         "length": 16384
00:23:16.412       },
00:23:16.412       "queue_depth": 128,
00:23:16.412       "io_size": 4096,
00:23:16.412       "runtime": 10.007667,
00:23:16.412       "iops": 6799.686680222274,
00:23:16.412       "mibps": 26.561276094618258,
00:23:16.412       "io_failed": 0,
00:23:16.412       "io_timeout": 0,
00:23:16.412       "avg_latency_us": 18789.095025482973,
00:23:16.412       "min_latency_us": 1303.2727272727273,
00:23:16.412       "max_latency_us": 3019898.88
00:23:16.412     }
00:23:16.412   ],
00:23:16.412   "core_count": 1
00:23:16.412 }
00:23:16.412 06:15:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=96496
00:23:16.412 06:15:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:16.412 06:15:41 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:23:16.412 Running I/O for 10 seconds...
00:23:17.349 06:15:42 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:17.611 7317.00 IOPS, 28.58 MiB/s [2024-10-01 06:15:42.987482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:66720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.611 [2024-10-01 06:15:42.987546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:66848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:66856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:66864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:66872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:66880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:66888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:66896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:66904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:66912 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:66920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:66928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:66936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:66944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:66952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:66960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:66968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.987978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.987993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:66976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.988003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.988015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.988024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.988036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:66992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 
06:15:42.988046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.988057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:67000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.988067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.988078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:67008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.988088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.611 [2024-10-01 06:15:42.988099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:67016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.611 [2024-10-01 06:15:42.988109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:67024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:67040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:67048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:67056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:67064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:67072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:67080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:67096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:67104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:67112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:67120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:67128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:67136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:67144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:67152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:67160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:67176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:67184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:67192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:67200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:67208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:67224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:67240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:67248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:67256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:67264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:67272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:67280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:67288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.612 [2024-10-01 06:15:42.988911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.612 [2024-10-01 06:15:42.988923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:67304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.988932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.988943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:67312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.988953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.988975] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:67320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.988985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.988997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:67328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:67336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:67352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:67360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:67368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:67376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:67384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:67392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989188] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:67400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:67408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:67416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:67424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:67432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:67440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:67448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:67456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:67464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:67472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67480 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:67488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:67504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:67512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:67520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:67528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:67536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:67544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:67552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:67560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 
[2024-10-01 06:15:42.989620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:67568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:67576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:67584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:67592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:67600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.613 [2024-10-01 06:15:42.989738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:67608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.613 [2024-10-01 06:15:42.989748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:67616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:67624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:67632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:67640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:67656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:67664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:67672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:67680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:67688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.989982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:67696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.989998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:67704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.990019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:67712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.990040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:67720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.990061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:66728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:66736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:66752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:66760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:66768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:66776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:66784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:66792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:66800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:66808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:66816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:66824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:66832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:17.614 [2024-10-01 06:15:42.990389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:67728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:17.614 [2024-10-01 06:15:42.990410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990421] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x243ce40 is same with the state(6) to be set 00:23:17.614 [2024-10-01 06:15:42.990433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:17.614 [2024-10-01 06:15:42.990441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:17.614 [2024-10-01 06:15:42.990450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:67736 len:8 PRP1 0x0 PRP2 0x0 00:23:17.614 [2024-10-01 06:15:42.990461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990505] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x243ce40 was disconnected and freed. reset controller. 
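The dump above is bdev_nvme manually completing every WRITE and READ still queued on qpair 0x243ce40 with ABORTED - SQ DELETION before the qpair is freed and the controller reset begins. If a saved copy of this output needs to be digested, standard text tools are enough; a sketch only, with nvmf_timeout.log as a purely hypothetical file name:

    # total aborted completions in the captured log (hypothetical file name)
    grep -c 'ABORTED - SQ DELETION' nvmf_timeout.log
    # aborted submissions broken down by opcode (READ vs WRITE)
    grep -Eo '(READ|WRITE) sqid:1' nvmf_timeout.log | sort | uniq -c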
00:23:17.614 [2024-10-01 06:15:42.990578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.614 [2024-10-01 06:15:42.990596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.614 [2024-10-01 06:15:42.990617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.614 [2024-10-01 06:15:42.990637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:17.614 [2024-10-01 06:15:42.990656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:17.614 [2024-10-01 06:15:42.990666] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:17.614 [2024-10-01 06:15:42.990888] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.614 [2024-10-01 06:15:42.990926] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:17.614 [2024-10-01 06:15:42.991022] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:17.614 [2024-10-01 06:15:42.991057] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:17.615 [2024-10-01 06:15:42.991070] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:17.615 [2024-10-01 06:15:42.991088] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:17.615 [2024-10-01 06:15:42.991105] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:17.615 [2024-10-01 06:15:42.991115] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:17.615 [2024-10-01 06:15:42.991125] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:17.615 [2024-10-01 06:15:42.991146] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:17.615 [2024-10-01 06:15:42.991161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:17.615 06:15:43 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:23:18.572 4170.00 IOPS, 16.29 MiB/s [2024-10-01 06:15:43.991256] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.572 [2024-10-01 06:15:43.991335] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:18.572 [2024-10-01 06:15:43.991351] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:18.572 [2024-10-01 06:15:43.991376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:18.572 [2024-10-01 06:15:43.991393] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.572 [2024-10-01 06:15:43.991403] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:18.573 [2024-10-01 06:15:43.991413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.573 [2024-10-01 06:15:43.991434] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.573 [2024-10-01 06:15:43.991446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:19.526 2780.00 IOPS, 10.86 MiB/s [2024-10-01 06:15:44.991536] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:19.526 [2024-10-01 06:15:44.991597] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:19.526 [2024-10-01 06:15:44.991626] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:19.526 [2024-10-01 06:15:44.991647] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:19.526 [2024-10-01 06:15:44.991663] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:19.526 [2024-10-01 06:15:44.991672] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:19.526 [2024-10-01 06:15:44.991682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:19.526 [2024-10-01 06:15:44.991703] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:19.526 [2024-10-01 06:15:44.991714] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.463 2085.00 IOPS, 8.14 MiB/s [2024-10-01 06:15:45.995209] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:20.463 [2024-10-01 06:15:45.995285] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x241a8b0 with addr=10.0.0.3, port=4420 00:23:20.463 [2024-10-01 06:15:45.995300] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x241a8b0 is same with the state(6) to be set 00:23:20.463 [2024-10-01 06:15:45.995572] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x241a8b0 (9): Bad file descriptor 00:23:20.463 [2024-10-01 06:15:45.995822] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:20.463 [2024-10-01 06:15:45.995844] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:20.463 [2024-10-01 06:15:45.995856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:20.463 [2024-10-01 06:15:45.999755] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:20.463 [2024-10-01 06:15:45.999802] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:20.463 06:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:20.722 [2024-10-01 06:15:46.272208] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:20.722 06:15:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 96496 00:23:21.548 1668.00 IOPS, 6.52 MiB/s [2024-10-01 06:15:47.039281] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
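The records above show the initiator retrying roughly once per second: each connect() to 10.0.0.3:4420 fails with errno 111 while the listener is gone, the controller stays in the failed state, and the first attempt after nvmf_subsystem_add_listener restores the listener succeeds ("Resetting controller successful"). The same window can be driven by hand with the RPCs the test script uses; a minimal sketch, assuming the target and the bdevperf session from earlier in this log are still running:

    # drop the TCP listener; reconnect attempts from the host fail with errno 111
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    sleep 3
    # restore the listener; the next reconnect attempt completes the controller reset
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420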
00:23:26.664 2733.83 IOPS, 10.68 MiB/s 3743.29 IOPS, 14.62 MiB/s 4507.38 IOPS, 17.61 MiB/s 5095.44 IOPS, 19.90 MiB/s 5558.70 IOPS, 21.71 MiB/s 00:23:26.665 Latency(us) 00:23:26.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.665 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:26.665 Verification LBA range: start 0x0 length 0x4000 00:23:26.665 NVMe0n1 : 10.01 5566.57 21.74 3777.95 0.00 13671.60 595.78 3019898.88 00:23:26.665 =================================================================================================================== 00:23:26.665 Total : 5566.57 21.74 3777.95 0.00 13671.60 0.00 3019898.88 00:23:26.665 { 00:23:26.665 "results": [ 00:23:26.665 { 00:23:26.665 "job": "NVMe0n1", 00:23:26.665 "core_mask": "0x4", 00:23:26.665 "workload": "verify", 00:23:26.665 "status": "finished", 00:23:26.665 "verify_range": { 00:23:26.665 "start": 0, 00:23:26.665 "length": 16384 00:23:26.665 }, 00:23:26.665 "queue_depth": 128, 00:23:26.665 "io_size": 4096, 00:23:26.665 "runtime": 10.008861, 00:23:26.665 "iops": 5566.5674645696445, 00:23:26.665 "mibps": 21.744404158475174, 00:23:26.665 "io_failed": 37813, 00:23:26.665 "io_timeout": 0, 00:23:26.665 "avg_latency_us": 13671.59510859169, 00:23:26.665 "min_latency_us": 595.7818181818182, 00:23:26.665 "max_latency_us": 3019898.88 00:23:26.665 } 00:23:26.665 ], 00:23:26.665 "core_count": 1 00:23:26.665 } 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 96376 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96376 ']' 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96376 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96376 00:23:26.665 killing process with pid 96376 00:23:26.665 Received shutdown signal, test time was about 10.000000 seconds 00:23:26.665 00:23:26.665 Latency(us) 00:23:26.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:26.665 =================================================================================================================== 00:23:26.665 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96376' 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96376 00:23:26.665 06:15:51 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96376 00:23:26.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
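As a consistency check on the summary row and the JSON block above (plain arithmetic, no new measurements): 5566.57 IOPS x 4096 B per I/O is about 22.8 million bytes/s, i.e. roughly 21.74 MiB/s, matching the reported mibps; and 37,813 failed I/Os over the 10.008861 s runtime is about 3,777.95 Fail/s, matching the Fail/s column. The 3,019,898.88 us (about 3 s) maximum latency plausibly corresponds to requests held while the listener was removed and the controller was being reset, though the log does not break that out.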
00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=96606 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 96606 /var/tmp/bdevperf.sock 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@831 -- # '[' -z 96606 ']' 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:26.665 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:26.665 [2024-10-01 06:15:52.117248] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:23:26.665 [2024-10-01 06:15:52.117374] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96606 ] 00:23:26.665 [2024-10-01 06:15:52.255109] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.924 [2024-10-01 06:15:52.293794] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:23:26.924 [2024-10-01 06:15:52.324591] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:26.924 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:26.924 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # return 0 00:23:26.924 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 96606 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:23:26.924 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=96615 00:23:26.924 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:23:27.183 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:23:27.442 NVMe0n1 00:23:27.442 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=96655 00:23:27.442 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.442 06:15:52 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:23:27.701 Running I/O for 10 seconds... 
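Pulled together, the fragments above configure the second bdevperf instance: bdevperf is launched against its own RPC socket and the harness waits for it to come up (the "Waiting for process to start up..." message), bdev_nvme_set_options is applied with the script's -r -1 -e 9 arguments, the controller is attached with a 5 second controller-loss timeout and a 2 second reconnect delay, and perform_tests starts the 10 second randread run. A sketch of the same sequence assembled only from the commands visible above; backgrounding and readiness waits are simplified relative to the harness's waitforlisten helpers, and the bpftrace attachment the script also starts is omitted:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests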
00:23:28.638 06:15:53 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 
00:23:28.901 15875.00 IOPS, 62.01 MiB/s 
00:23:28.901 [2024-10-01 06:15:54.258419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 
00:23:28.901 [2024-10-01 06:15:54.258610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:28.901 [2024-10-01 06:15:54.258640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:28.901 [2024-10-01 06:15:54.258652] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:28.901 [2024-10-01 06:15:54.258662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:28.901 [2024-10-01 06:15:54.258672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:28.901 [2024-10-01 06:15:54.258681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:28.901 [2024-10-01 06:15:54.258707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:23:28.901 [2024-10-01 06:15:54.258716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:28.901 [2024-10-01 06:15:54.258725] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f650 is same with the state(6) to be set 
(the tcp.c:1773 nvmf_tcp_qpair_set_recv_state message for tqpair=0x13a1b60 repeats throughout this stretch, with only the timestamps changing, from 06:15:54.258419 through 06:15:54.259485)
00:23:28.902 [2024-10-01 06:15:54.259493] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same
with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a1b60 is same with the state(6) to be set 00:23:28.902 [2024-10-01 06:15:54.259591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:113128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.902 [2024-10-01 06:15:54.259612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.902 [2024-10-01 06:15:54.259632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.902 [2024-10-01 06:15:54.259642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.902 [2024-10-01 06:15:54.259655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:52088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259784] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:27112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:49000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:90584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:69880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:63280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.259982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:66992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.259992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:70072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:64632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:107112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:18056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:53816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:75536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:114696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:33288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:57528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:63680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 
[2024-10-01 06:15:54.260486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.903 [2024-10-01 06:15:54.260570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.903 [2024-10-01 06:15:54.260579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:94424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:9680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:4576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:13088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:67080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:65288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:106576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:104528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:60 nsid:1 lba:117128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:25000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.260989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.260999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:130544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:95840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24848 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:93384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:34904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:66504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:60912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:119448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:52128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:28.904 [2024-10-01 06:15:54.261335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:37720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.904 [2024-10-01 06:15:54.261400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.904 [2024-10-01 06:15:54.261410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:100488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:89904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:122264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:48856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:107808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:128944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:127800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:52152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:90648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:7216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.261987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.261997] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:28696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:91288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.905 [2024-10-01 06:15:54.262138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:40760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.905 [2024-10-01 06:15:54.262147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:118128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:111280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:128224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:28.906 [2024-10-01 06:15:54.262404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262414] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c50810 is same with the state(6) to be set 00:23:28.906 [2024-10-01 06:15:54.262426] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:28.906 [2024-10-01 06:15:54.262436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:28.906 [2024-10-01 06:15:54.262444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:106872 len:8 PRP1 0x0 PRP2 0x0 00:23:28.906 [2024-10-01 06:15:54.262453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:28.906 [2024-10-01 06:15:54.262494] bdev_nvme.c:1730:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c50810 was disconnected and freed. reset controller. 00:23:28.906 [2024-10-01 06:15:54.262750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:28.906 [2024-10-01 06:15:54.262799] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2f650 (9): Bad file descriptor 00:23:28.906 [2024-10-01 06:15:54.262924] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:28.906 [2024-10-01 06:15:54.262950] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2f650 with addr=10.0.0.3, port=4420 00:23:28.906 [2024-10-01 06:15:54.262961] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f650 is same with the state(6) to be set 00:23:28.906 [2024-10-01 06:15:54.262981] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2f650 (9): Bad file descriptor 00:23:28.906 [2024-10-01 06:15:54.262997] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:28.906 [2024-10-01 06:15:54.263006] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:28.906 [2024-10-01 06:15:54.263017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:28.906 [2024-10-01 06:15:54.263041] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:28.906 [2024-10-01 06:15:54.263053] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:28.906 06:15:54 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 96655 00:23:30.851 9272.00 IOPS, 36.22 MiB/s 6181.33 IOPS, 24.15 MiB/s [2024-10-01 06:15:56.263275] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:30.852 [2024-10-01 06:15:56.263361] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2f650 with addr=10.0.0.3, port=4420 00:23:30.852 [2024-10-01 06:15:56.263391] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f650 is same with the state(6) to be set 00:23:30.852 [2024-10-01 06:15:56.263427] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2f650 (9): Bad file descriptor 00:23:30.852 [2024-10-01 06:15:56.263444] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:30.852 [2024-10-01 06:15:56.263454] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:30.852 [2024-10-01 06:15:56.263464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:30.852 [2024-10-01 06:15:56.263487] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
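The cycle above then repeats: connect() inside uring_sock_create() fails with errno 111 (ECONNREFUSED), i.e. the connection to 10.0.0.3:4420 is refused, controller re-initialization fails, and bdev_nvme schedules another reset after a delay. Each delayed retry shows up later in trace.txt as a 'reconnect delay bdev controller NVMe0' event, which the timeout test counts before it tears down. A minimal sketch of that kind of count, assuming the trace was saved to a file named timeout_trace.txt (the file name is illustrative, not the test's actual path):

# Illustrative only: tally delayed reconnects from a saved trace file.
trace=timeout_trace.txt
delays=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace" || true)
echo "reconnect delays observed: ${delays}"
# The check further down compares this count against 2; this run records 3.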
00:23:30.852 [2024-10-01 06:15:56.263499] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:32.725 4636.00 IOPS, 18.11 MiB/s 3708.80 IOPS, 14.49 MiB/s [2024-10-01 06:15:58.263639] uring.c: 665:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:23:32.725 [2024-10-01 06:15:58.263723] nvme_tcp.c:2399:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2f650 with addr=10.0.0.3, port=4420 00:23:32.725 [2024-10-01 06:15:58.263739] nvme_tcp.c: 337:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2f650 is same with the state(6) to be set 00:23:32.725 [2024-10-01 06:15:58.263761] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2f650 (9): Bad file descriptor 00:23:32.725 [2024-10-01 06:15:58.263778] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:32.725 [2024-10-01 06:15:58.263787] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:32.725 [2024-10-01 06:15:58.263797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:32.725 [2024-10-01 06:15:58.263821] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:32.725 [2024-10-01 06:15:58.263833] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:34.864 3090.67 IOPS, 12.07 MiB/s 2649.14 IOPS, 10.35 MiB/s [2024-10-01 06:16:00.264043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:34.864 [2024-10-01 06:16:00.264552] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:34.864 [2024-10-01 06:16:00.264660] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:34.864 [2024-10-01 06:16:00.264753] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:23:34.864 [2024-10-01 06:16:00.264853] bdev_nvme.c:2181:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
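The interleaved IOPS figures (9272 -> 6181 -> 4636 -> 3708 -> 3090 -> 2649 -> 2318) are consistent with a fixed number of completed I/Os being averaged over a growing elapsed time while the queue sits idle across the failed reconnects; the summary that follows reports the final average over the full ~8.2 s run. As a quick sanity check of the MiB/s column against the JSON values below (numbers copied from the results; the arithmetic itself is only illustrative):

# 4 KiB reads at the reported average IOPS should match the reported throughput.
awk 'BEGIN {
    iops = 2262.341589424285          # "iops" from the JSON results below
    io_size = 4096                    # "io_size" in bytes
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)   # prints 8.84
}'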
00:23:35.772 2318.00 IOPS, 9.05 MiB/s 00:23:35.772 Latency(us) 00:23:35.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.772 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:23:35.772 NVMe0n1 : 8.20 2262.34 8.84 15.62 0.00 56091.96 7417.48 7015926.69 00:23:35.772 =================================================================================================================== 00:23:35.772 Total : 2262.34 8.84 15.62 0.00 56091.96 7417.48 7015926.69 00:23:35.772 { 00:23:35.772 "results": [ 00:23:35.772 { 00:23:35.772 "job": "NVMe0n1", 00:23:35.772 "core_mask": "0x4", 00:23:35.772 "workload": "randread", 00:23:35.772 "status": "finished", 00:23:35.772 "queue_depth": 128, 00:23:35.772 "io_size": 4096, 00:23:35.772 "runtime": 8.196817, 00:23:35.772 "iops": 2262.341589424285, 00:23:35.772 "mibps": 8.837271833688613, 00:23:35.772 "io_failed": 128, 00:23:35.772 "io_timeout": 0, 00:23:35.772 "avg_latency_us": 56091.95997818805, 00:23:35.772 "min_latency_us": 7417.483636363636, 00:23:35.772 "max_latency_us": 7015926.69090909 00:23:35.772 } 00:23:35.772 ], 00:23:35.772 "core_count": 1 00:23:35.772 } 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.772 Attaching 5 probes... 00:23:35.772 1366.988671: reset bdev controller NVMe0 00:23:35.772 1367.082222: reconnect bdev controller NVMe0 00:23:35.772 3367.404271: reconnect delay bdev controller NVMe0 00:23:35.772 3367.438516: reconnect bdev controller NVMe0 00:23:35.772 5367.770449: reconnect delay bdev controller NVMe0 00:23:35.772 5367.788605: reconnect bdev controller NVMe0 00:23:35.772 7368.268455: reconnect delay bdev controller NVMe0 00:23:35.772 7368.287890: reconnect bdev controller NVMe0 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 96615 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 96606 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96606 ']' 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96606 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96606 00:23:35.772 killing process with pid 96606 00:23:35.772 Received shutdown signal, test time was about 8.268651 seconds 00:23:35.772 00:23:35.772 Latency(us) 00:23:35.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:35.772 =================================================================================================================== 00:23:35.772 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- 
common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96606' 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96606 00:23:35.772 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96606 00:23:36.031 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # nvmfcleanup 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.291 rmmod nvme_tcp 00:23:36.291 rmmod nvme_fabrics 00:23:36.291 rmmod nvme_keyring 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@513 -- # '[' -n 96196 ']' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@514 -- # killprocess 96196 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@950 -- # '[' -z 96196 ']' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # kill -0 96196 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # uname 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 96196 00:23:36.291 killing process with pid 96196 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@968 -- # echo 'killing process with pid 96196' 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@969 -- # kill 96196 00:23:36.291 06:16:01 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@974 -- # wait 96196 00:23:36.550 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # '[' '' == iso ']' 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # 
iptables-save 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@787 -- # iptables-restore 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:36.551 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:23:36.810 00:23:36.810 real 0m44.655s 00:23:36.810 user 2m10.628s 00:23:36.810 sys 0m5.518s 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:23:36.810 ************************************ 00:23:36.810 END TEST nvmf_timeout 00:23:36.810 ************************************ 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:23:36.810 ************************************ 00:23:36.810 END TEST nvmf_host 00:23:36.810 ************************************ 00:23:36.810 00:23:36.810 real 5m41.031s 00:23:36.810 user 15m58.719s 00:23:36.810 sys 1m15.935s 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.810 06:16:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.810 
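The teardown above applies the same kill-and-wait pattern twice, first to the timeout helper (pid 96606) and then to the nvmf target (pid 96196): probe the pid with kill -0, read its command name with ps for the log message, send the signal, and wait for the process to be reaped before unloading nvme-tcp/nvme-fabrics/nvme-keyring and deleting the veth and bridge interfaces. A minimal sketch of that pattern, assuming a helper called killprocess_sketch (the name and body are illustrative, not the autotest_common.sh implementation):

# Hedged sketch of the kill-and-wait pattern seen in the log above.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0      # already gone, nothing to do
    local name
    name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0 / reactor_2
    echo "killing process with pid ${pid} (${name})"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # only reaps children of this shell
}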
06:16:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:23:36.810 06:16:02 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:23:36.810 ************************************ 00:23:36.810 END TEST nvmf_tcp 00:23:36.810 ************************************ 00:23:36.810 00:23:36.810 real 14m53.476s 00:23:36.810 user 39m12.351s 00:23:36.810 sys 4m3.687s 00:23:36.810 06:16:02 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.810 06:16:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:37.070 06:16:02 -- spdk/autotest.sh@281 -- # [[ 1 -eq 0 ]] 00:23:37.070 06:16:02 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:37.070 06:16:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:37.070 06:16:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:37.070 06:16:02 -- common/autotest_common.sh@10 -- # set +x 00:23:37.070 ************************************ 00:23:37.070 START TEST nvmf_dif 00:23:37.070 ************************************ 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:23:37.070 * Looking for test storage... 00:23:37.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1681 -- # lcov --version 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:37.070 06:16:02 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.070 --rc genhtml_branch_coverage=1 00:23:37.070 --rc genhtml_function_coverage=1 00:23:37.070 --rc genhtml_legend=1 00:23:37.070 --rc geninfo_all_blocks=1 00:23:37.070 --rc geninfo_unexecuted_blocks=1 00:23:37.070 00:23:37.070 ' 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.070 --rc genhtml_branch_coverage=1 00:23:37.070 --rc genhtml_function_coverage=1 00:23:37.070 --rc genhtml_legend=1 00:23:37.070 --rc geninfo_all_blocks=1 00:23:37.070 --rc geninfo_unexecuted_blocks=1 00:23:37.070 00:23:37.070 ' 00:23:37.070 06:16:02 nvmf_dif -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:37.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.070 --rc genhtml_branch_coverage=1 00:23:37.071 --rc genhtml_function_coverage=1 00:23:37.071 --rc genhtml_legend=1 00:23:37.071 --rc geninfo_all_blocks=1 00:23:37.071 --rc geninfo_unexecuted_blocks=1 00:23:37.071 00:23:37.071 ' 00:23:37.071 06:16:02 nvmf_dif -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:37.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:37.071 --rc genhtml_branch_coverage=1 00:23:37.071 --rc genhtml_function_coverage=1 00:23:37.071 --rc genhtml_legend=1 00:23:37.071 --rc geninfo_all_blocks=1 00:23:37.071 --rc geninfo_unexecuted_blocks=1 00:23:37.071 00:23:37.071 ' 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.071 06:16:02 nvmf_dif -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:37.071 06:16:02 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:23:37.071 06:16:02 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.071 06:16:02 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.071 06:16:02 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.071 06:16:02 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.071 06:16:02 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.071 06:16:02 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.071 06:16:02 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:23:37.071 06:16:02 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.071 06:16:02 nvmf_dif -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:37.071 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:23:37.071 06:16:02 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@472 -- # prepare_net_devs 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@434 -- # local -g is_hw=no 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@436 -- # remove_spdk_ns 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.071 06:16:02 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:23:37.071 06:16:02 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@456 -- # nvmf_veth_init 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:37.071 06:16:02 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:37.331 Cannot find device 
"nvmf_init_br" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@162 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:37.331 Cannot find device "nvmf_init_br2" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@163 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:37.331 Cannot find device "nvmf_tgt_br" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@164 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:37.331 Cannot find device "nvmf_tgt_br2" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@165 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:37.331 Cannot find device "nvmf_init_br" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@166 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:37.331 Cannot find device "nvmf_init_br2" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@167 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:37.331 Cannot find device "nvmf_tgt_br" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@168 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:37.331 Cannot find device "nvmf_tgt_br2" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@169 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:37.331 Cannot find device "nvmf_br" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@170 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:37.331 Cannot find device "nvmf_init_if" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@171 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:37.331 Cannot find device "nvmf_init_if2" 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@172 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:37.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@173 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:37.331 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@174 -- # true 00:23:37.331 06:16:02 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev 
nvmf_init_if2 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:37.332 06:16:02 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:37.591 06:16:02 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:37.591 06:16:02 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:37.591 06:16:02 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:37.591 06:16:02 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:37.591 06:16:02 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:37.592 06:16:02 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:37.592 06:16:02 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:37.592 06:16:02 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:37.592 06:16:02 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:37.592 06:16:02 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:37.592 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:37.592 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:23:37.592 00:23:37.592 --- 10.0.0.3 ping statistics --- 00:23:37.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.592 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:37.592 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:23:37.592 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:23:37.592 00:23:37.592 --- 10.0.0.4 ping statistics --- 00:23:37.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.592 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:37.592 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:37.592 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:23:37.592 00:23:37.592 --- 10.0.0.1 ping statistics --- 00:23:37.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.592 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:37.592 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:37.592 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:23:37.592 00:23:37.592 --- 10.0.0.2 ping statistics --- 00:23:37.592 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:37.592 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@457 -- # return 0 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:23:37.592 06:16:03 nvmf_dif -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:37.851 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:37.851 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:37.851 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:23:38.110 06:16:03 nvmf_dif -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:23:38.111 06:16:03 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:23:38.111 06:16:03 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@505 -- # nvmfpid=97146 00:23:38.111 06:16:03 nvmf_dif -- nvmf/common.sh@506 -- # waitforlisten 97146 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 97146 ']' 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
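The trace above is nvmf_veth_init assembling the virtual test network before the target comes up: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the peer ends enslaved to the nvmf_br bridge, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A condensed sketch of the same topology, showing one initiator pair and one target pair with the names and 10.0.0.0/24 addresses used in this run (illustrative only; the authoritative version is nvmf_veth_init in test/nvmf/common.sh):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator pair
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.3                                   # initiator -> target
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

nvmf_tgt is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF, visible in the nvmfappstart call above), so the target listens on 10.0.0.3/10.0.0.4 while fio connects from the host side of the bridge.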
00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.111 06:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.111 [2024-10-01 06:16:03.573062] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:23:38.111 [2024-10-01 06:16:03.573166] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:38.111 [2024-10-01 06:16:03.714305] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.371 [2024-10-01 06:16:03.757049] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:38.371 [2024-10-01 06:16:03.757117] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:38.371 [2024-10-01 06:16:03.757131] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:38.371 [2024-10-01 06:16:03.757141] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:38.371 [2024-10-01 06:16:03.757150] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:38.371 [2024-10-01 06:16:03.757191] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:38.371 [2024-10-01 06:16:03.793081] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:23:38.371 06:16:03 nvmf_dif -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 06:16:03 nvmf_dif -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.371 06:16:03 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:23:38.371 06:16:03 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 [2024-10-01 06:16:03.893663] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.371 06:16:03 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 ************************************ 00:23:38.371 START TEST fio_dif_1_default 00:23:38.371 ************************************ 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:23:38.371 06:16:03 
nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 bdev_null0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:38.371 [2024-10-01 06:16:03.947084] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # config=() 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # local subsystem config 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:38.371 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:38.371 { 00:23:38.371 "params": { 00:23:38.371 "name": "Nvme$subsystem", 00:23:38.371 "trtype": "$TEST_TRANSPORT", 00:23:38.372 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:38.372 "adrfam": "ipv4", 00:23:38.372 "trsvcid": "$NVMF_PORT", 00:23:38.372 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:38.372 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:38.372 "hdgst": ${hdgst:-false}, 00:23:38.372 "ddgst": ${ddgst:-false} 00:23:38.372 }, 00:23:38.372 "method": "bdev_nvme_attach_controller" 00:23:38.372 } 00:23:38.372 EOF 00:23:38.372 )") 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@82 -- # gen_fio_conf 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@578 -- # cat 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@580 -- # jq . 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@581 -- # IFS=, 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:38.372 06:16:03 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:38.372 "params": { 00:23:38.372 "name": "Nvme0", 00:23:38.372 "trtype": "tcp", 00:23:38.372 "traddr": "10.0.0.3", 00:23:38.372 "adrfam": "ipv4", 00:23:38.372 "trsvcid": "4420", 00:23:38.372 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:38.372 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:38.372 "hdgst": false, 00:23:38.372 "ddgst": false 00:23:38.372 }, 00:23:38.372 "method": "bdev_nvme_attach_controller" 00:23:38.372 }' 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:38.631 06:16:03 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:38.631 06:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:38.631 06:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:38.631 06:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 
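To make the fio plumbing traced above easier to follow: gen_nvmf_target_json emits a JSON bdev config with one bdev_nvme_attach_controller entry per subsystem, and fio_bdev runs fio with the SPDK bdev plugin preloaded, handing that config over an anonymous fd (--spdk_json_conf /dev/fd/62); the ldd/grep libasan steps only decide whether an ASAN runtime must be preloaded as well (empty here). Outside the harness the same invocation reduces to roughly the sketch below, where bdev.json and job.fio are illustrative file names, job.fio stands in for the generated gen_fio_conf output, and the "subsystems"/"bdev" wrapper is the standard SPDK JSON-config layout rather than something echoed in this log:

cat > bdev.json <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false } } ]
  } ]
}
JSON
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio

The job file itself just points filename= at the bdev the attach creates (Nvme0n1 here), which is how the single "filename0" job below ends up issuing 4 KiB random reads against the null bdev over NVMe/TCP.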
00:23:38.631 06:16:04 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:38.631 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:38.631 fio-3.35 00:23:38.631 Starting 1 thread 00:23:50.846 00:23:50.846 filename0: (groupid=0, jobs=1): err= 0: pid=97200: Tue Oct 1 06:16:14 2024 00:23:50.846 read: IOPS=8753, BW=34.2MiB/s (35.9MB/s)(342MiB/10001msec) 00:23:50.846 slat (usec): min=5, max=686, avg= 8.95, stdev= 6.48 00:23:50.846 clat (usec): min=313, max=3973, avg=430.17, stdev=62.58 00:23:50.846 lat (usec): min=319, max=3998, avg=439.13, stdev=63.77 00:23:50.846 clat percentiles (usec): 00:23:50.846 | 1.00th=[ 334], 5.00th=[ 351], 10.00th=[ 363], 20.00th=[ 379], 00:23:50.846 | 30.00th=[ 396], 40.00th=[ 408], 50.00th=[ 424], 60.00th=[ 441], 00:23:50.846 | 70.00th=[ 457], 80.00th=[ 478], 90.00th=[ 506], 95.00th=[ 529], 00:23:50.846 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 734], 99.95th=[ 848], 00:23:50.846 | 99.99th=[ 1270] 00:23:50.846 bw ( KiB/s): min=33536, max=36352, per=100.00%, avg=35023.16, stdev=770.10, samples=19 00:23:50.846 iops : min= 8384, max= 9088, avg=8755.79, stdev=192.52, samples=19 00:23:50.846 lat (usec) : 500=88.54%, 750=11.37%, 1000=0.07% 00:23:50.846 lat (msec) : 2=0.02%, 4=0.01% 00:23:50.846 cpu : usr=84.01%, sys=13.36%, ctx=102, majf=0, minf=4 00:23:50.846 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:50.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:50.846 issued rwts: total=87540,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:50.846 latency : target=0, window=0, percentile=100.00%, depth=4 00:23:50.846 00:23:50.846 Run status group 0 (all jobs): 00:23:50.846 READ: bw=34.2MiB/s (35.9MB/s), 34.2MiB/s-34.2MiB/s (35.9MB/s-35.9MB/s), io=342MiB (359MB), run=10001-10001msec 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.846 00:23:50.846 real 0m10.901s 00:23:50.846 user 0m8.974s 00:23:50.846 sys 0m1.582s 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:50.846 06:16:14 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 ************************************ 00:23:50.846 END TEST fio_dif_1_default 00:23:50.846 ************************************ 00:23:50.846 06:16:14 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:23:50.846 06:16:14 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:50.846 06:16:14 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:50.846 06:16:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 ************************************ 00:23:50.846 START TEST fio_dif_1_multi_subsystems 00:23:50.846 ************************************ 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 bdev_null0 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.846 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 [2024-10-01 06:16:14.899125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
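create_subsystems applies the same recipe per subsystem, and the chunk that follows simply repeats for subsystem 1 what was just traced for subsystem 0: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and the requested DIF type, an nqn...cnodeN subsystem, the bdev attached as its namespace, and a TCP listener on 10.0.0.3:4420. Since rpc_cmd is a thin wrapper over scripts/rpc.py, the subsystem-0 sequence above is equivalent to roughly:

scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.3 -s 4420

Because the transport was created earlier with -o --dif-insert-or-strip, it is the target that inserts and strips the protection information for these metadata-formatted bdevs; the fio initiator only ever sees plain 512-byte blocks.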
00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 bdev_null1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # config=() 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # local subsystem config 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:50.847 { 00:23:50.847 "params": { 00:23:50.847 "name": "Nvme$subsystem", 00:23:50.847 "trtype": "$TEST_TRANSPORT", 00:23:50.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.847 "adrfam": "ipv4", 00:23:50.847 "trsvcid": "$NVMF_PORT", 00:23:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.847 "hdgst": ${hdgst:-false}, 00:23:50.847 "ddgst": ${ddgst:-false} 00:23:50.847 }, 00:23:50.847 "method": "bdev_nvme_attach_controller" 00:23:50.847 } 00:23:50.847 EOF 
00:23:50.847 )") 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:23:50.847 { 00:23:50.847 "params": { 00:23:50.847 "name": "Nvme$subsystem", 00:23:50.847 "trtype": "$TEST_TRANSPORT", 00:23:50.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.847 "adrfam": "ipv4", 00:23:50.847 "trsvcid": "$NVMF_PORT", 00:23:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.847 "hdgst": ${hdgst:-false}, 00:23:50.847 "ddgst": ${ddgst:-false} 00:23:50.847 }, 00:23:50.847 "method": "bdev_nvme_attach_controller" 00:23:50.847 } 00:23:50.847 EOF 00:23:50.847 )") 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@578 -- # cat 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@580 -- # jq . 
00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@581 -- # IFS=, 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:23:50.847 "params": { 00:23:50.847 "name": "Nvme0", 00:23:50.847 "trtype": "tcp", 00:23:50.847 "traddr": "10.0.0.3", 00:23:50.847 "adrfam": "ipv4", 00:23:50.847 "trsvcid": "4420", 00:23:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:50.847 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:23:50.847 "hdgst": false, 00:23:50.847 "ddgst": false 00:23:50.847 }, 00:23:50.847 "method": "bdev_nvme_attach_controller" 00:23:50.847 },{ 00:23:50.847 "params": { 00:23:50.847 "name": "Nvme1", 00:23:50.847 "trtype": "tcp", 00:23:50.847 "traddr": "10.0.0.3", 00:23:50.847 "adrfam": "ipv4", 00:23:50.847 "trsvcid": "4420", 00:23:50.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.847 "hdgst": false, 00:23:50.847 "ddgst": false 00:23:50.847 }, 00:23:50.847 "method": "bdev_nvme_attach_controller" 00:23:50.847 }' 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:50.847 06:16:14 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:50.847 06:16:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:50.847 06:16:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:50.847 06:16:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:23:50.847 06:16:15 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:23:50.847 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:50.847 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:23:50.847 fio-3.35 00:23:50.847 Starting 2 threads 00:24:00.824 00:24:00.824 filename0: (groupid=0, jobs=1): err= 0: pid=97359: Tue Oct 1 06:16:25 2024 00:24:00.824 read: IOPS=4841, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:24:00.824 slat (usec): min=6, max=338, avg=13.96, stdev= 6.97 00:24:00.824 clat (usec): min=520, max=1546, avg=787.86, stdev=82.86 00:24:00.824 lat (usec): min=529, max=1572, avg=801.82, stdev=84.26 00:24:00.824 clat percentiles (usec): 00:24:00.824 | 1.00th=[ 619], 5.00th=[ 660], 10.00th=[ 685], 20.00th=[ 717], 00:24:00.824 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 807], 00:24:00.824 | 70.00th=[ 832], 80.00th=[ 857], 90.00th=[ 898], 95.00th=[ 922], 00:24:00.825 | 99.00th=[ 988], 99.50th=[ 1020], 99.90th=[ 1172], 99.95th=[ 1270], 00:24:00.825 | 99.99th=[ 1418] 00:24:00.825 bw ( KiB/s): min=19072, max=19680, per=50.01%, avg=19376.84, stdev=201.08, samples=19 00:24:00.825 iops : min= 4768, max= 4920, 
avg=4844.21, stdev=50.27, samples=19 00:24:00.825 lat (usec) : 750=34.55%, 1000=64.75% 00:24:00.825 lat (msec) : 2=0.70% 00:24:00.825 cpu : usr=88.13%, sys=9.75%, ctx=118, majf=0, minf=0 00:24:00.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:00.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.825 issued rwts: total=48416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:00.825 filename1: (groupid=0, jobs=1): err= 0: pid=97360: Tue Oct 1 06:16:25 2024 00:24:00.825 read: IOPS=4844, BW=18.9MiB/s (19.8MB/s)(189MiB/10001msec) 00:24:00.825 slat (nsec): min=6291, max=90381, avg=13777.62, stdev=5950.08 00:24:00.825 clat (usec): min=419, max=1554, avg=787.78, stdev=73.72 00:24:00.825 lat (usec): min=429, max=1591, avg=801.56, stdev=74.43 00:24:00.825 clat percentiles (usec): 00:24:00.825 | 1.00th=[ 652], 5.00th=[ 676], 10.00th=[ 693], 20.00th=[ 717], 00:24:00.825 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 807], 00:24:00.825 | 70.00th=[ 824], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 914], 00:24:00.825 | 99.00th=[ 963], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1057], 00:24:00.825 | 99.99th=[ 1221] 00:24:00.825 bw ( KiB/s): min=19072, max=19680, per=50.05%, avg=19390.32, stdev=193.62, samples=19 00:24:00.825 iops : min= 4768, max= 4920, avg=4847.58, stdev=48.40, samples=19 00:24:00.825 lat (usec) : 500=0.05%, 750=33.49%, 1000=66.22% 00:24:00.825 lat (msec) : 2=0.24% 00:24:00.825 cpu : usr=89.14%, sys=9.27%, ctx=12, majf=0, minf=9 00:24:00.825 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:00.825 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.825 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:00.825 issued rwts: total=48448,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:00.825 latency : target=0, window=0, percentile=100.00%, depth=4 00:24:00.825 00:24:00.825 Run status group 0 (all jobs): 00:24:00.825 READ: bw=37.8MiB/s (39.7MB/s), 18.9MiB/s-18.9MiB/s (19.8MB/s-19.8MB/s), io=378MiB (397MB), run=10001-10001msec 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 
-- # set +x 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 ************************************ 00:24:00.825 END TEST fio_dif_1_multi_subsystems 00:24:00.825 ************************************ 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 00:24:00.825 real 0m11.007s 00:24:00.825 user 0m18.374s 00:24:00.825 sys 0m2.164s 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 06:16:25 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:24:00.825 06:16:25 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:00.825 06:16:25 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 ************************************ 00:24:00.825 START TEST fio_dif_rand_params 00:24:00.825 ************************************ 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 bdev_null0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:00.825 [2024-10-01 06:16:25.962924] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:00.825 { 00:24:00.825 "params": { 00:24:00.825 "name": "Nvme$subsystem", 00:24:00.825 "trtype": "$TEST_TRANSPORT", 00:24:00.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.825 "adrfam": "ipv4", 00:24:00.825 "trsvcid": "$NVMF_PORT", 00:24:00.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.825 "hdgst": ${hdgst:-false}, 00:24:00.825 "ddgst": ${ddgst:-false} 00:24:00.825 }, 00:24:00.825 "method": "bdev_nvme_attach_controller" 00:24:00.825 } 00:24:00.825 EOF 00:24:00.825 )") 00:24:00.825 
06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
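fio_dif_rand_params then drives the same machinery with a heavier profile, set a little earlier in the trace: DIF type 3 on the null bdev, 128 KiB blocks, three jobs at iodepth 3 for a 5-second time-based run. The generated job file is not echoed into the log, but the fio banner further below (rw=randread, bs=128KiB, ioengine=spdk_bdev, iodepth=3, "Starting 3 threads") implies something on the order of this hand-written equivalent; filename, thread=1 and time_based are assumptions, not values copied from the log:

cat > job.fio <<'FIO'
[global]
ioengine=spdk_bdev   # needs the preloaded SPDK fio bdev plugin
thread=1             # assumed: the bdev plugin runs fio in threaded mode
bs=128k
iodepth=3
runtime=5
time_based=1

[filename0]
rw=randread
numjobs=3
filename=Nvme0n1     # assumed bdev name from bdev_nvme_attach_controller above
FIO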
00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:00.825 06:16:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:00.825 "params": { 00:24:00.825 "name": "Nvme0", 00:24:00.825 "trtype": "tcp", 00:24:00.825 "traddr": "10.0.0.3", 00:24:00.825 "adrfam": "ipv4", 00:24:00.825 "trsvcid": "4420", 00:24:00.825 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:00.825 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:00.825 "hdgst": false, 00:24:00.825 "ddgst": false 00:24:00.825 }, 00:24:00.825 "method": "bdev_nvme_attach_controller" 00:24:00.825 }' 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:00.825 06:16:26 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:00.825 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:00.825 ... 
00:24:00.825 fio-3.35 00:24:00.825 Starting 3 threads 00:24:06.099 00:24:06.099 filename0: (groupid=0, jobs=1): err= 0: pid=97515: Tue Oct 1 06:16:31 2024 00:24:06.099 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5006msec) 00:24:06.099 slat (nsec): min=7450, max=71142, avg=15829.89, stdev=5993.39 00:24:06.099 clat (usec): min=8774, max=12709, avg=11605.94, stdev=332.44 00:24:06.099 lat (usec): min=8787, max=12741, avg=11621.77, stdev=332.71 00:24:06.099 clat percentiles (usec): 00:24:06.099 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:24:06.099 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:24:06.099 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:24:06.099 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:24:06.099 | 99.99th=[12649] 00:24:06.099 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32947.20, stdev=242.86, samples=10 00:24:06.099 iops : min= 252, max= 258, avg=257.40, stdev= 1.90, samples=10 00:24:06.099 lat (msec) : 10=0.23%, 20=99.77% 00:24:06.099 cpu : usr=90.23%, sys=8.93%, ctx=5, majf=0, minf=9 00:24:06.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.099 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:06.099 filename0: (groupid=0, jobs=1): err= 0: pid=97516: Tue Oct 1 06:16:31 2024 00:24:06.099 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5006msec) 00:24:06.099 slat (nsec): min=7334, max=67821, avg=15924.32, stdev=5673.30 00:24:06.099 clat (usec): min=8776, max=12709, avg=11605.44, stdev=332.39 00:24:06.099 lat (usec): min=8789, max=12739, avg=11621.37, stdev=332.79 00:24:06.099 clat percentiles (usec): 00:24:06.099 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11207], 20.00th=[11338], 00:24:06.099 | 30.00th=[11469], 40.00th=[11469], 50.00th=[11600], 60.00th=[11600], 00:24:06.099 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:24:06.099 | 99.00th=[12518], 99.50th=[12518], 99.90th=[12649], 99.95th=[12649], 00:24:06.099 | 99.99th=[12649] 00:24:06.099 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32947.20, stdev=242.86, samples=10 00:24:06.099 iops : min= 252, max= 258, avg=257.40, stdev= 1.90, samples=10 00:24:06.099 lat (msec) : 10=0.23%, 20=99.77% 00:24:06.099 cpu : usr=89.39%, sys=9.89%, ctx=7, majf=0, minf=9 00:24:06.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.099 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.099 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:06.099 filename0: (groupid=0, jobs=1): err= 0: pid=97517: Tue Oct 1 06:16:31 2024 00:24:06.099 read: IOPS=257, BW=32.2MiB/s (33.8MB/s)(161MiB/5008msec) 00:24:06.099 slat (nsec): min=5644, max=58771, avg=10572.57, stdev=5089.78 00:24:06.099 clat (usec): min=10896, max=12952, avg=11619.17, stdev=299.14 00:24:06.099 lat (usec): min=10903, max=12963, avg=11629.74, stdev=299.43 00:24:06.099 clat percentiles (usec): 00:24:06.099 | 1.00th=[11076], 5.00th=[11207], 10.00th=[11338], 20.00th=[11338], 00:24:06.099 | 30.00th=[11469], 40.00th=[11469], 
50.00th=[11600], 60.00th=[11600], 00:24:06.099 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:24:06.099 | 99.00th=[12387], 99.50th=[12518], 99.90th=[12911], 99.95th=[12911], 00:24:06.099 | 99.99th=[12911] 00:24:06.099 bw ( KiB/s): min=32256, max=33024, per=33.31%, avg=32947.20, stdev=242.86, samples=10 00:24:06.099 iops : min= 252, max= 258, avg=257.40, stdev= 1.90, samples=10 00:24:06.099 lat (msec) : 20=100.00% 00:24:06.099 cpu : usr=90.31%, sys=8.93%, ctx=7, majf=0, minf=9 00:24:06.099 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:06.100 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.100 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:06.100 issued rwts: total=1290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:06.100 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:06.100 00:24:06.100 Run status group 0 (all jobs): 00:24:06.100 READ: bw=96.6MiB/s (101MB/s), 32.2MiB/s-32.2MiB/s (33.8MB/s-33.8MB/s), io=484MiB (507MB), run=5006-5008msec 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:24:06.360 06:16:31 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 bdev_null0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 [2024-10-01 06:16:31.843021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 bdev_null1 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 bdev_null2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:24:06.360 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:06.361 { 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme$subsystem", 00:24:06.361 "trtype": "$TEST_TRANSPORT", 00:24:06.361 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "$NVMF_PORT", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.361 "hdgst": ${hdgst:-false}, 00:24:06.361 "ddgst": ${ddgst:-false} 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 } 00:24:06.361 EOF 00:24:06.361 )") 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:06.361 { 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme$subsystem", 00:24:06.361 "trtype": "$TEST_TRANSPORT", 00:24:06.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "$NVMF_PORT", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.361 "hdgst": ${hdgst:-false}, 00:24:06.361 "ddgst": ${ddgst:-false} 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 } 00:24:06.361 EOF 00:24:06.361 )") 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:06.361 06:16:31 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:06.361 { 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme$subsystem", 00:24:06.361 "trtype": "$TEST_TRANSPORT", 00:24:06.361 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "$NVMF_PORT", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:06.361 "hdgst": ${hdgst:-false}, 00:24:06.361 "ddgst": ${ddgst:-false} 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 } 00:24:06.361 EOF 00:24:06.361 )") 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme0", 00:24:06.361 "trtype": "tcp", 00:24:06.361 "traddr": "10.0.0.3", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "4420", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:06.361 "hdgst": false, 00:24:06.361 "ddgst": false 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 },{ 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme1", 00:24:06.361 "trtype": "tcp", 00:24:06.361 "traddr": "10.0.0.3", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "4420", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:06.361 "hdgst": false, 00:24:06.361 "ddgst": false 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 },{ 00:24:06.361 "params": { 00:24:06.361 "name": "Nvme2", 00:24:06.361 "trtype": "tcp", 00:24:06.361 "traddr": "10.0.0.3", 00:24:06.361 "adrfam": "ipv4", 00:24:06.361 "trsvcid": "4420", 00:24:06.361 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:06.361 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:06.361 "hdgst": false, 00:24:06.361 "ddgst": false 00:24:06.361 }, 00:24:06.361 "method": "bdev_nvme_attach_controller" 00:24:06.361 }' 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:06.361 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:06.621 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:06.621 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:06.621 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:06.621 06:16:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:06.621 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:06.621 ... 00:24:06.621 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:06.621 ... 00:24:06.621 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:24:06.621 ... 00:24:06.621 fio-3.35 00:24:06.621 Starting 24 threads 00:24:18.834 00:24:18.834 filename0: (groupid=0, jobs=1): err= 0: pid=97609: Tue Oct 1 06:16:42 2024 00:24:18.834 read: IOPS=183, BW=735KiB/s (753kB/s)(7356KiB/10003msec) 00:24:18.834 slat (usec): min=8, max=3633, avg=17.12, stdev=84.53 00:24:18.834 clat (msec): min=2, max=148, avg=86.92, stdev=24.05 00:24:18.834 lat (msec): min=2, max=148, avg=86.94, stdev=24.05 00:24:18.834 clat percentiles (msec): 00:24:18.834 | 1.00th=[ 7], 5.00th=[ 51], 10.00th=[ 62], 20.00th=[ 71], 00:24:18.834 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 85], 60.00th=[ 93], 00:24:18.834 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.834 | 99.00th=[ 132], 99.50th=[ 136], 99.90th=[ 148], 99.95th=[ 148], 00:24:18.834 | 99.99th=[ 148] 00:24:18.834 bw ( KiB/s): min= 608, max= 968, per=4.05%, avg=723.37, stdev=115.48, samples=19 00:24:18.834 iops : min= 152, max= 242, avg=180.84, stdev=28.87, samples=19 00:24:18.834 lat (msec) : 4=0.33%, 10=0.87%, 50=3.64%, 100=59.54%, 250=35.62% 00:24:18.834 cpu : usr=33.39%, sys=2.24%, ctx=1038, majf=0, minf=9 00:24:18.834 IO depths : 1=0.1%, 2=1.5%, 4=6.0%, 8=77.2%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:18.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.834 complete : 0=0.0%, 4=88.6%, 8=10.1%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.834 issued rwts: total=1839,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.834 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.834 filename0: (groupid=0, jobs=1): err= 0: pid=97610: Tue Oct 1 06:16:42 2024 00:24:18.834 read: IOPS=198, BW=794KiB/s (813kB/s)(7944KiB/10005msec) 00:24:18.834 slat (usec): min=4, max=8035, avg=25.66, stdev=269.86 00:24:18.834 clat (msec): min=4, max=155, avg=80.51, stdev=25.62 00:24:18.834 lat (msec): min=4, max=155, avg=80.53, stdev=25.62 00:24:18.834 clat percentiles (msec): 00:24:18.834 | 1.00th=[ 24], 5.00th=[ 40], 10.00th=[ 48], 20.00th=[ 61], 00:24:18.834 | 30.00th=[ 70], 40.00th=[ 72], 50.00th=[ 74], 60.00th=[ 82], 00:24:18.835 | 70.00th=[ 100], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 121], 00:24:18.835 | 99.00th=[ 127], 99.50th=[ 140], 99.90th=[ 157], 99.95th=[ 157], 00:24:18.835 | 99.99th=[ 157] 00:24:18.835 bw ( KiB/s): min= 664, max= 1176, per=4.42%, avg=789.05, stdev=161.95, samples=19 00:24:18.835 iops : min= 166, max= 294, avg=197.26, stdev=40.49, samples=19 00:24:18.835 lat (msec) : 10=0.65%, 50=13.09%, 100=56.75%, 250=29.51% 00:24:18.835 cpu : usr=38.36%, sys=2.43%, ctx=1116, majf=0, minf=9 00:24:18.835 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
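The job file for this 24-thread pass travels over /dev/fd/61 and is never echoed, so only its effects are visible in the job descriptions above (randread, 4 KiB blocks, iodepth 16, three filename sections at numjobs=8 each). A rough reconstruction under those assumptions; the Nvme*n1 names assume SPDK's usual <controller>n<nsid> bdev naming and are not printed anywhere in this log.

# Hedged reconstruction of the generated job file, written out for illustration.
cat > randread-dif.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=4k
iodepth=16
numjobs=8

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
EOF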
00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97611: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=188, BW=753KiB/s (771kB/s)(7584KiB/10078msec) 00:24:18.835 slat (nsec): min=7319, max=44160, avg=12319.08, stdev=4372.53 00:24:18.835 clat (msec): min=2, max=156, avg=84.83, stdev=28.46 00:24:18.835 lat (msec): min=2, max=156, avg=84.84, stdev=28.46 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 3], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 70], 00:24:18.835 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 97], 00:24:18.835 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.835 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:24:18.835 | 99.99th=[ 157] 00:24:18.835 bw ( KiB/s): min= 552, max= 1408, per=4.21%, avg=752.00, stdev=207.82, samples=20 00:24:18.835 iops : min= 138, max= 352, avg=188.00, stdev=51.96, samples=20 00:24:18.835 lat (msec) : 4=1.79%, 10=2.43%, 50=4.80%, 100=53.16%, 250=37.82% 00:24:18.835 cpu : usr=31.34%, sys=2.07%, ctx=871, majf=0, minf=0 00:24:18.835 IO depths : 1=0.2%, 2=1.4%, 4=5.2%, 8=77.4%, 16=15.9%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97612: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=187, BW=750KiB/s (768kB/s)(7540KiB/10057msec) 00:24:18.835 slat (usec): min=6, max=7613, avg=35.88, stdev=310.81 00:24:18.835 clat (msec): min=29, max=144, avg=85.05, stdev=23.82 00:24:18.835 lat (msec): min=29, max=144, avg=85.08, stdev=23.82 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 57], 20.00th=[ 67], 00:24:18.835 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 86], 00:24:18.835 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 120], 00:24:18.835 | 99.00th=[ 126], 99.50th=[ 128], 99.90th=[ 144], 99.95th=[ 144], 00:24:18.835 | 99.99th=[ 144] 00:24:18.835 bw ( KiB/s): min= 584, max= 1024, per=4.18%, avg=747.50, stdev=148.63, samples=20 00:24:18.835 iops : min= 146, max= 256, avg=186.85, stdev=37.12, samples=20 00:24:18.835 lat (msec) : 50=7.59%, 100=56.34%, 250=36.07% 00:24:18.835 cpu : usr=42.22%, sys=2.79%, ctx=1396, majf=0, minf=9 00:24:18.835 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97613: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=184, BW=736KiB/s (754kB/s)(7404KiB/10055msec) 00:24:18.835 slat (usec): min=8, max=4034, avg=17.53, stdev=93.57 00:24:18.835 clat (msec): min=33, max=149, avg=86.69, stdev=22.47 00:24:18.835 lat (msec): min=33, max=149, avg=86.71, stdev=22.47 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 39], 5.00th=[ 53], 10.00th=[ 62], 20.00th=[ 70], 00:24:18.835 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 81], 60.00th=[ 87], 00:24:18.835 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 
118], 95.00th=[ 121], 00:24:18.835 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 144], 99.95th=[ 150], 00:24:18.835 | 99.99th=[ 150] 00:24:18.835 bw ( KiB/s): min= 616, max= 992, per=4.11%, avg=734.05, stdev=118.45, samples=20 00:24:18.835 iops : min= 154, max= 248, avg=183.50, stdev=29.61, samples=20 00:24:18.835 lat (msec) : 50=4.54%, 100=60.35%, 250=35.12% 00:24:18.835 cpu : usr=42.62%, sys=3.23%, ctx=1638, majf=0, minf=9 00:24:18.835 IO depths : 1=0.1%, 2=1.3%, 4=5.1%, 8=78.2%, 16=15.3%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 complete : 0=0.0%, 4=88.4%, 8=10.4%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1851,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97614: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=190, BW=763KiB/s (782kB/s)(7688KiB/10073msec) 00:24:18.835 slat (usec): min=3, max=8036, avg=30.88, stdev=337.50 00:24:18.835 clat (usec): min=1662, max=152486, avg=83563.94, stdev=27715.15 00:24:18.835 lat (usec): min=1677, max=152496, avg=83594.82, stdev=27708.49 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 6], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 62], 00:24:18.835 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 81], 60.00th=[ 95], 00:24:18.835 | 70.00th=[ 107], 80.00th=[ 112], 90.00th=[ 118], 95.00th=[ 121], 00:24:18.835 | 99.00th=[ 125], 99.50th=[ 127], 99.90th=[ 148], 99.95th=[ 153], 00:24:18.835 | 99.99th=[ 153] 00:24:18.835 bw ( KiB/s): min= 584, max= 1256, per=4.27%, avg=762.40, stdev=197.19, samples=20 00:24:18.835 iops : min= 146, max= 314, avg=190.60, stdev=49.30, samples=20 00:24:18.835 lat (msec) : 2=0.10%, 4=0.21%, 10=2.29%, 50=9.89%, 100=49.58% 00:24:18.835 lat (msec) : 250=37.93% 00:24:18.835 cpu : usr=36.81%, sys=2.34%, ctx=1121, majf=0, minf=9 00:24:18.835 IO depths : 1=0.2%, 2=0.5%, 4=1.8%, 8=81.1%, 16=16.4%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 complete : 0=0.0%, 4=88.0%, 8=11.6%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97615: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=187, BW=751KiB/s (769kB/s)(7552KiB/10057msec) 00:24:18.835 slat (usec): min=7, max=8023, avg=26.37, stdev=243.47 00:24:18.835 clat (msec): min=26, max=150, avg=84.97, stdev=23.69 00:24:18.835 lat (msec): min=26, max=150, avg=85.00, stdev=23.70 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 35], 5.00th=[ 48], 10.00th=[ 56], 20.00th=[ 66], 00:24:18.835 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:24:18.835 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.835 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 144], 99.95th=[ 150], 00:24:18.835 | 99.99th=[ 150] 00:24:18.835 bw ( KiB/s): min= 592, max= 992, per=4.19%, avg=748.70, stdev=146.37, samples=20 00:24:18.835 iops : min= 148, max= 248, avg=187.15, stdev=36.55, samples=20 00:24:18.835 lat (msec) : 50=6.99%, 100=57.57%, 250=35.43% 00:24:18.835 cpu : usr=34.33%, sys=2.23%, ctx=1031, majf=0, minf=9 00:24:18.835 IO depths : 1=0.1%, 2=1.0%, 4=3.8%, 8=79.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:18.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 
complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.835 issued rwts: total=1888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.835 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.835 filename0: (groupid=0, jobs=1): err= 0: pid=97616: Tue Oct 1 06:16:42 2024 00:24:18.835 read: IOPS=191, BW=767KiB/s (785kB/s)(7696KiB/10034msec) 00:24:18.835 slat (usec): min=3, max=8037, avg=47.53, stdev=515.81 00:24:18.835 clat (msec): min=24, max=143, avg=83.18, stdev=24.34 00:24:18.835 lat (msec): min=24, max=143, avg=83.23, stdev=24.33 00:24:18.835 clat percentiles (msec): 00:24:18.835 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 49], 20.00th=[ 62], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 85], 00:24:18.836 | 70.00th=[ 107], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.836 | 99.00th=[ 121], 99.50th=[ 124], 99.90th=[ 144], 99.95th=[ 144], 00:24:18.836 | 99.99th=[ 144] 00:24:18.836 bw ( KiB/s): min= 592, max= 1152, per=4.28%, avg=765.40, stdev=159.38, samples=20 00:24:18.836 iops : min= 148, max= 288, avg=191.35, stdev=39.84, samples=20 00:24:18.836 lat (msec) : 50=10.65%, 100=57.48%, 250=31.86% 00:24:18.836 cpu : usr=31.01%, sys=2.18%, ctx=863, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=0.2%, 4=0.8%, 8=83.0%, 16=16.0%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=87.2%, 8=12.6%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97617: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=187, BW=750KiB/s (768kB/s)(7520KiB/10024msec) 00:24:18.836 slat (usec): min=4, max=7025, avg=28.81, stdev=248.71 00:24:18.836 clat (msec): min=35, max=158, avg=85.12, stdev=21.78 00:24:18.836 lat (msec): min=35, max=158, avg=85.15, stdev=21.79 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 43], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 69], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 84], 00:24:18.836 | 70.00th=[ 106], 80.00th=[ 110], 90.00th=[ 117], 95.00th=[ 120], 00:24:18.836 | 99.00th=[ 126], 99.50th=[ 132], 99.90th=[ 159], 99.95th=[ 159], 00:24:18.836 | 99.99th=[ 159] 00:24:18.836 bw ( KiB/s): min= 664, max= 968, per=4.18%, avg=747.45, stdev=104.91, samples=20 00:24:18.836 iops : min= 166, max= 242, avg=186.80, stdev=26.22, samples=20 00:24:18.836 lat (msec) : 50=3.99%, 100=63.46%, 250=32.55% 00:24:18.836 cpu : usr=39.68%, sys=2.89%, ctx=1386, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=1.6%, 4=6.3%, 8=77.2%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=88.5%, 8=10.1%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97618: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=179, BW=718KiB/s (735kB/s)(7220KiB/10061msec) 00:24:18.836 slat (usec): min=3, max=8028, avg=33.25, stdev=387.24 00:24:18.836 clat (msec): min=2, max=156, avg=88.78, stdev=27.40 00:24:18.836 lat (msec): min=2, max=156, avg=88.82, stdev=27.41 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 5], 5.00th=[ 42], 10.00th=[ 59], 20.00th=[ 72], 
00:24:18.836 | 30.00th=[ 73], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 108], 00:24:18.836 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.836 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:24:18.836 | 99.99th=[ 157] 00:24:18.836 bw ( KiB/s): min= 584, max= 1282, per=4.02%, avg=718.10, stdev=172.45, samples=20 00:24:18.836 iops : min= 146, max= 320, avg=179.50, stdev=43.03, samples=20 00:24:18.836 lat (msec) : 4=0.89%, 10=2.66%, 50=3.93%, 100=51.41%, 250=41.11% 00:24:18.836 cpu : usr=31.47%, sys=2.25%, ctx=861, majf=0, minf=9 00:24:18.836 IO depths : 1=0.2%, 2=1.9%, 4=7.8%, 8=74.6%, 16=15.6%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97619: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=183, BW=733KiB/s (751kB/s)(7348KiB/10025msec) 00:24:18.836 slat (usec): min=8, max=8029, avg=41.45, stdev=412.47 00:24:18.836 clat (msec): min=26, max=144, avg=87.07, stdev=21.12 00:24:18.836 lat (msec): min=26, max=144, avg=87.11, stdev=21.12 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 66], 20.00th=[ 71], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 85], 00:24:18.836 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 121], 00:24:18.836 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:24:18.836 | 99.99th=[ 144] 00:24:18.836 bw ( KiB/s): min= 664, max= 912, per=4.07%, avg=727.85, stdev=77.37, samples=20 00:24:18.836 iops : min= 166, max= 228, avg=181.90, stdev=19.31, samples=20 00:24:18.836 lat (msec) : 50=1.52%, 100=64.62%, 250=33.86% 00:24:18.836 cpu : usr=39.36%, sys=2.95%, ctx=1527, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=2.3%, 4=9.3%, 8=73.9%, 16=14.5%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=89.4%, 8=8.6%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1837,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97620: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=187, BW=749KiB/s (767kB/s)(7508KiB/10022msec) 00:24:18.836 slat (usec): min=4, max=8030, avg=22.85, stdev=206.85 00:24:18.836 clat (msec): min=26, max=154, avg=85.27, stdev=22.32 00:24:18.836 lat (msec): min=26, max=154, avg=85.29, stdev=22.32 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 42], 5.00th=[ 56], 10.00th=[ 62], 20.00th=[ 69], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 78], 60.00th=[ 85], 00:24:18.836 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.836 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 155], 99.95th=[ 155], 00:24:18.836 | 99.99th=[ 155] 00:24:18.836 bw ( KiB/s): min= 664, max= 968, per=4.17%, avg=744.25, stdev=101.03, samples=20 00:24:18.836 iops : min= 166, max= 242, avg=186.05, stdev=25.26, samples=20 00:24:18.836 lat (msec) : 50=3.89%, 100=63.19%, 250=32.92% 00:24:18.836 cpu : usr=39.23%, sys=2.78%, ctx=1145, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=1.7%, 4=6.6%, 8=76.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=88.6%, 8=10.0%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97621: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=189, BW=759KiB/s (777kB/s)(7632KiB/10060msec) 00:24:18.836 slat (usec): min=3, max=2814, avg=17.03, stdev=64.59 00:24:18.836 clat (msec): min=32, max=145, avg=84.12, stdev=22.98 00:24:18.836 lat (msec): min=32, max=145, avg=84.14, stdev=22.98 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 67], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 78], 60.00th=[ 84], 00:24:18.836 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 118], 95.00th=[ 120], 00:24:18.836 | 99.00th=[ 124], 99.50th=[ 125], 99.90th=[ 146], 99.95th=[ 146], 00:24:18.836 | 99.99th=[ 146] 00:24:18.836 bw ( KiB/s): min= 600, max= 1077, per=4.23%, avg=756.65, stdev=144.17, samples=20 00:24:18.836 iops : min= 150, max= 269, avg=189.15, stdev=36.01, samples=20 00:24:18.836 lat (msec) : 50=6.92%, 100=60.69%, 250=32.39% 00:24:18.836 cpu : usr=39.84%, sys=2.87%, ctx=1492, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=0.7%, 4=2.9%, 8=80.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.836 filename1: (groupid=0, jobs=1): err= 0: pid=97622: Tue Oct 1 06:16:42 2024 00:24:18.836 read: IOPS=183, BW=734KiB/s (752kB/s)(7376KiB/10047msec) 00:24:18.836 slat (usec): min=6, max=4027, avg=23.04, stdev=164.62 00:24:18.836 clat (msec): min=37, max=139, avg=86.97, stdev=21.56 00:24:18.836 lat (msec): min=37, max=139, avg=86.99, stdev=21.56 00:24:18.836 clat percentiles (msec): 00:24:18.836 | 1.00th=[ 47], 5.00th=[ 59], 10.00th=[ 66], 20.00th=[ 70], 00:24:18.836 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 79], 60.00th=[ 85], 00:24:18.836 | 70.00th=[ 106], 80.00th=[ 111], 90.00th=[ 120], 95.00th=[ 122], 00:24:18.836 | 99.00th=[ 132], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:24:18.836 | 99.99th=[ 140] 00:24:18.836 bw ( KiB/s): min= 664, max= 920, per=4.09%, avg=730.95, stdev=88.74, samples=20 00:24:18.836 iops : min= 166, max= 230, avg=182.70, stdev=22.18, samples=20 00:24:18.836 lat (msec) : 50=1.52%, 100=63.72%, 250=34.76% 00:24:18.836 cpu : usr=42.40%, sys=3.20%, ctx=1487, majf=0, minf=9 00:24:18.836 IO depths : 1=0.1%, 2=2.0%, 4=7.9%, 8=75.3%, 16=14.8%, 32=0.0%, >=64=0.0% 00:24:18.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 complete : 0=0.0%, 4=89.1%, 8=9.2%, 16=1.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.836 issued rwts: total=1844,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.836 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename1: (groupid=0, jobs=1): err= 0: pid=97623: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=193, BW=776KiB/s (795kB/s)(7760KiB/10001msec) 00:24:18.837 slat (nsec): min=3738, max=40019, avg=14816.75, stdev=5337.28 00:24:18.837 clat (msec): min=2, max=145, avg=82.41, stdev=24.89 00:24:18.837 lat (msec): min=2, max=145, avg=82.43, stdev=24.89 00:24:18.837 clat percentiles (msec): 
00:24:18.837 | 1.00th=[ 7], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 64], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 84], 00:24:18.837 | 70.00th=[ 102], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.837 | 99.00th=[ 125], 99.50th=[ 134], 99.90th=[ 146], 99.95th=[ 146], 00:24:18.837 | 99.99th=[ 146] 00:24:18.837 bw ( KiB/s): min= 640, max= 1048, per=4.28%, avg=765.89, stdev=138.25, samples=19 00:24:18.837 iops : min= 160, max= 262, avg=191.47, stdev=34.56, samples=19 00:24:18.837 lat (msec) : 4=0.31%, 10=0.82%, 50=9.74%, 100=58.87%, 250=30.26% 00:24:18.837 cpu : usr=31.26%, sys=1.94%, ctx=852, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=0.6%, 4=2.2%, 8=81.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=87.3%, 8=12.2%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename1: (groupid=0, jobs=1): err= 0: pid=97624: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=191, BW=764KiB/s (783kB/s)(7668KiB/10033msec) 00:24:18.837 slat (usec): min=4, max=8026, avg=28.49, stdev=258.84 00:24:18.837 clat (msec): min=24, max=142, avg=83.59, stdev=22.61 00:24:18.837 lat (msec): min=24, max=142, avg=83.61, stdev=22.62 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 37], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 67], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 79], 60.00th=[ 83], 00:24:18.837 | 70.00th=[ 105], 80.00th=[ 111], 90.00th=[ 117], 95.00th=[ 120], 00:24:18.837 | 99.00th=[ 126], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 144], 00:24:18.837 | 99.99th=[ 144] 00:24:18.837 bw ( KiB/s): min= 640, max= 1040, per=4.25%, avg=760.40, stdev=131.44, samples=20 00:24:18.837 iops : min= 160, max= 260, avg=190.10, stdev=32.86, samples=20 00:24:18.837 lat (msec) : 50=5.95%, 100=63.38%, 250=30.67% 00:24:18.837 cpu : usr=40.44%, sys=2.85%, ctx=1202, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=0.6%, 4=2.5%, 8=81.2%, 16=15.7%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=87.6%, 8=11.9%, 16=0.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename2: (groupid=0, jobs=1): err= 0: pid=97625: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=179, BW=717KiB/s (734kB/s)(7212KiB/10060msec) 00:24:18.837 slat (usec): min=8, max=8027, avg=23.77, stdev=224.72 00:24:18.837 clat (msec): min=26, max=147, avg=88.99, stdev=22.73 00:24:18.837 lat (msec): min=26, max=147, avg=89.01, stdev=22.73 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 36], 5.00th=[ 58], 10.00th=[ 65], 20.00th=[ 71], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 96], 00:24:18.837 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.837 | 99.00th=[ 140], 99.50th=[ 140], 99.90th=[ 148], 99.95th=[ 148], 00:24:18.837 | 99.99th=[ 148] 00:24:18.837 bw ( KiB/s): min= 584, max= 920, per=4.00%, avg=714.70, stdev=113.78, samples=20 00:24:18.837 iops : min= 146, max= 230, avg=178.65, stdev=28.40, samples=20 00:24:18.837 lat (msec) : 50=2.05%, 100=60.12%, 250=37.83% 00:24:18.837 cpu : usr=34.81%, sys=2.44%, ctx=1072, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=2.1%, 
4=8.3%, 8=74.3%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=89.7%, 8=8.5%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename2: (groupid=0, jobs=1): err= 0: pid=97626: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=181, BW=728KiB/s (745kB/s)(7284KiB/10006msec) 00:24:18.837 slat (nsec): min=4516, max=40380, avg=15096.02, stdev=5362.09 00:24:18.837 clat (msec): min=26, max=154, avg=87.82, stdev=21.83 00:24:18.837 lat (msec): min=26, max=154, avg=87.83, stdev=21.83 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 48], 5.00th=[ 61], 10.00th=[ 64], 20.00th=[ 72], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:24:18.837 | 70.00th=[ 108], 80.00th=[ 110], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.837 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:24:18.837 | 99.99th=[ 155] 00:24:18.837 bw ( KiB/s): min= 640, max= 1008, per=4.06%, avg=725.05, stdev=88.85, samples=19 00:24:18.837 iops : min= 160, max= 252, avg=181.26, stdev=22.21, samples=19 00:24:18.837 lat (msec) : 50=1.54%, 100=64.80%, 250=33.66% 00:24:18.837 cpu : usr=31.39%, sys=2.15%, ctx=849, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=2.0%, 4=8.2%, 8=75.1%, 16=14.6%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=89.1%, 8=9.1%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1821,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename2: (groupid=0, jobs=1): err= 0: pid=97627: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=172, BW=692KiB/s (708kB/s)(6948KiB/10043msec) 00:24:18.837 slat (usec): min=3, max=8024, avg=27.34, stdev=255.34 00:24:18.837 clat (msec): min=43, max=159, avg=92.30, stdev=21.45 00:24:18.837 lat (msec): min=43, max=159, avg=92.33, stdev=21.45 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 56], 5.00th=[ 64], 10.00th=[ 70], 20.00th=[ 72], 00:24:18.837 | 30.00th=[ 74], 40.00th=[ 82], 50.00th=[ 88], 60.00th=[ 97], 00:24:18.837 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 121], 95.00th=[ 122], 00:24:18.837 | 99.00th=[ 144], 99.50th=[ 157], 99.90th=[ 161], 99.95th=[ 161], 00:24:18.837 | 99.99th=[ 161] 00:24:18.837 bw ( KiB/s): min= 512, max= 896, per=3.85%, avg=688.45, stdev=94.86, samples=20 00:24:18.837 iops : min= 128, max= 224, avg=172.10, stdev=23.70, samples=20 00:24:18.837 lat (msec) : 50=0.35%, 100=60.28%, 250=39.38% 00:24:18.837 cpu : usr=40.46%, sys=2.73%, ctx=1313, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=3.0%, 4=12.1%, 8=70.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=90.4%, 8=6.9%, 16=2.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename2: (groupid=0, jobs=1): err= 0: pid=97628: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=182, BW=732KiB/s (749kB/s)(7360KiB/10061msec) 00:24:18.837 slat (usec): min=4, max=8028, avg=21.22, stdev=208.98 00:24:18.837 clat (msec): min=31, max=161, avg=87.22, stdev=23.56 00:24:18.837 lat (msec): min=31, 
max=161, avg=87.25, stdev=23.56 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 34], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 70], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 82], 60.00th=[ 96], 00:24:18.837 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.837 | 99.00th=[ 125], 99.50th=[ 125], 99.90th=[ 148], 99.95th=[ 161], 00:24:18.837 | 99.99th=[ 161] 00:24:18.837 bw ( KiB/s): min= 576, max= 1000, per=4.08%, avg=729.50, stdev=134.87, samples=20 00:24:18.837 iops : min= 144, max= 250, avg=182.35, stdev=33.69, samples=20 00:24:18.837 lat (msec) : 50=6.30%, 100=55.33%, 250=38.37% 00:24:18.837 cpu : usr=39.97%, sys=2.85%, ctx=1203, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=1.1%, 4=4.2%, 8=78.8%, 16=15.8%, 32=0.0%, >=64=0.0% 00:24:18.837 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.837 issued rwts: total=1840,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.837 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.837 filename2: (groupid=0, jobs=1): err= 0: pid=97629: Tue Oct 1 06:16:42 2024 00:24:18.837 read: IOPS=194, BW=780KiB/s (798kB/s)(7804KiB/10008msec) 00:24:18.837 slat (usec): min=4, max=8032, avg=22.83, stdev=256.61 00:24:18.837 clat (msec): min=23, max=181, avg=81.96, stdev=25.34 00:24:18.837 lat (msec): min=24, max=181, avg=81.98, stdev=25.33 00:24:18.837 clat percentiles (msec): 00:24:18.837 | 1.00th=[ 34], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 61], 00:24:18.837 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 73], 60.00th=[ 85], 00:24:18.837 | 70.00th=[ 99], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.837 | 99.00th=[ 121], 99.50th=[ 165], 99.90th=[ 182], 99.95th=[ 182], 00:24:18.837 | 99.99th=[ 182] 00:24:18.837 bw ( KiB/s): min= 576, max= 1152, per=4.36%, avg=779.79, stdev=168.20, samples=19 00:24:18.837 iops : min= 144, max= 288, avg=194.95, stdev=42.05, samples=19 00:24:18.837 lat (msec) : 50=12.87%, 100=58.38%, 250=28.75% 00:24:18.837 cpu : usr=31.71%, sys=2.16%, ctx=890, majf=0, minf=9 00:24:18.837 IO depths : 1=0.1%, 2=0.3%, 4=1.0%, 8=83.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:24:18.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 complete : 0=0.0%, 4=86.8%, 8=12.9%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.838 filename2: (groupid=0, jobs=1): err= 0: pid=97630: Tue Oct 1 06:16:42 2024 00:24:18.838 read: IOPS=181, BW=725KiB/s (743kB/s)(7292KiB/10056msec) 00:24:18.838 slat (usec): min=4, max=4027, avg=18.76, stdev=133.66 00:24:18.838 clat (msec): min=33, max=155, avg=87.98, stdev=23.52 00:24:18.838 lat (msec): min=33, max=155, avg=88.00, stdev=23.51 00:24:18.838 clat percentiles (msec): 00:24:18.838 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 61], 20.00th=[ 69], 00:24:18.838 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 83], 60.00th=[ 102], 00:24:18.838 | 70.00th=[ 108], 80.00th=[ 113], 90.00th=[ 120], 95.00th=[ 121], 00:24:18.838 | 99.00th=[ 126], 99.50th=[ 130], 99.90th=[ 146], 99.95th=[ 157], 00:24:18.838 | 99.99th=[ 157] 00:24:18.838 bw ( KiB/s): min= 568, max= 992, per=4.04%, avg=722.75, stdev=144.29, samples=20 00:24:18.838 iops : min= 142, max= 248, avg=180.65, stdev=36.04, samples=20 00:24:18.838 lat (msec) : 50=6.36%, 100=52.99%, 250=40.65% 00:24:18.838 cpu : usr=41.87%, sys=2.63%, ctx=1404, 
majf=0, minf=9 00:24:18.838 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=78.7%, 16=16.1%, 32=0.0%, >=64=0.0% 00:24:18.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 complete : 0=0.0%, 4=88.6%, 8=10.5%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 issued rwts: total=1823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.838 filename2: (groupid=0, jobs=1): err= 0: pid=97631: Tue Oct 1 06:16:42 2024 00:24:18.838 read: IOPS=191, BW=767KiB/s (785kB/s)(7688KiB/10026msec) 00:24:18.838 slat (nsec): min=8295, max=34574, avg=14769.91, stdev=4691.54 00:24:18.838 clat (msec): min=32, max=143, avg=83.33, stdev=23.30 00:24:18.838 lat (msec): min=32, max=143, avg=83.35, stdev=23.30 00:24:18.838 clat percentiles (msec): 00:24:18.838 | 1.00th=[ 39], 5.00th=[ 47], 10.00th=[ 56], 20.00th=[ 66], 00:24:18.838 | 30.00th=[ 72], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:24:18.838 | 70.00th=[ 104], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 121], 00:24:18.838 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 131], 99.95th=[ 144], 00:24:18.838 | 99.99th=[ 144] 00:24:18.838 bw ( KiB/s): min= 640, max= 1080, per=4.28%, avg=764.00, stdev=146.00, samples=20 00:24:18.838 iops : min= 160, max= 270, avg=190.95, stdev=36.49, samples=20 00:24:18.838 lat (msec) : 50=8.22%, 100=61.08%, 250=30.70% 00:24:18.838 cpu : usr=33.71%, sys=2.21%, ctx=973, majf=0, minf=9 00:24:18.838 IO depths : 1=0.1%, 2=0.7%, 4=2.7%, 8=81.2%, 16=15.4%, 32=0.0%, >=64=0.0% 00:24:18.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 complete : 0=0.0%, 4=87.4%, 8=12.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.838 filename2: (groupid=0, jobs=1): err= 0: pid=97632: Tue Oct 1 06:16:42 2024 00:24:18.838 read: IOPS=190, BW=761KiB/s (780kB/s)(7624KiB/10014msec) 00:24:18.838 slat (usec): min=4, max=8035, avg=41.08, stdev=435.37 00:24:18.838 clat (msec): min=26, max=153, avg=83.83, stdev=22.96 00:24:18.838 lat (msec): min=26, max=153, avg=83.87, stdev=22.95 00:24:18.838 clat percentiles (msec): 00:24:18.838 | 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 59], 20.00th=[ 67], 00:24:18.838 | 30.00th=[ 71], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 84], 00:24:18.838 | 70.00th=[ 104], 80.00th=[ 110], 90.00th=[ 118], 95.00th=[ 121], 00:24:18.838 | 99.00th=[ 126], 99.50th=[ 126], 99.90th=[ 153], 99.95th=[ 155], 00:24:18.838 | 99.99th=[ 155] 00:24:18.838 bw ( KiB/s): min= 664, max= 1024, per=4.24%, avg=758.00, stdev=123.93, samples=20 00:24:18.838 iops : min= 166, max= 256, avg=189.50, stdev=30.98, samples=20 00:24:18.838 lat (msec) : 50=6.30%, 100=62.70%, 250=31.01% 00:24:18.838 cpu : usr=33.60%, sys=2.46%, ctx=1064, majf=0, minf=9 00:24:18.838 IO depths : 1=0.1%, 2=1.0%, 4=3.9%, 8=79.8%, 16=15.2%, 32=0.0%, >=64=0.0% 00:24:18.838 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 complete : 0=0.0%, 4=87.8%, 8=11.3%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:18.838 issued rwts: total=1906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:18.838 latency : target=0, window=0, percentile=100.00%, depth=16 00:24:18.838 00:24:18.838 Run status group 0 (all jobs): 00:24:18.838 READ: bw=17.4MiB/s (18.3MB/s), 692KiB/s-794KiB/s (708kB/s-813kB/s), io=176MiB (184MB), run=10001-10078msec 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # 
destroy_subsystems 0 1 2 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # NULL_DIF=1 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:24:18.838 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 bdev_null0 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 [2024-10-01 06:16:43.059878] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
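Stripped of the xtrace noise, each create_subsystem call above is four RPCs: create a null bdev with per-block metadata so DIF can be carried end to end, create an NVMe-oF subsystem, attach the bdev as a namespace, and add a TCP listener. A sketch using scripts/rpc.py directly (rpc_cmd is the autotest wrapper around it); the rpc.py path is assumed from this environment, while the arguments are exactly the ones in the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # assumed path to the SPDK RPC client

# Null bdev (64 MB) with 512-byte blocks plus 16 bytes of metadata, DIF type 1.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1

# Expose it over NVMe/TCP on 10.0.0.3:4420.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420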
00:24:18.839 bdev_null1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # config=() 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # local subsystem config 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:18.839 { 00:24:18.839 "params": { 00:24:18.839 "name": "Nvme$subsystem", 00:24:18.839 "trtype": "$TEST_TRANSPORT", 00:24:18.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.839 "adrfam": "ipv4", 00:24:18.839 "trsvcid": "$NVMF_PORT", 00:24:18.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.839 "hdgst": ${hdgst:-false}, 00:24:18.839 "ddgst": ${ddgst:-false} 00:24:18.839 }, 00:24:18.839 "method": "bdev_nvme_attach_controller" 00:24:18.839 } 00:24:18.839 EOF 00:24:18.839 )") 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:18.839 { 00:24:18.839 "params": { 00:24:18.839 "name": "Nvme$subsystem", 00:24:18.839 "trtype": "$TEST_TRANSPORT", 00:24:18.839 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:18.839 "adrfam": "ipv4", 00:24:18.839 "trsvcid": "$NVMF_PORT", 00:24:18.839 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:18.839 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:18.839 "hdgst": ${hdgst:-false}, 00:24:18.839 "ddgst": ${ddgst:-false} 00:24:18.839 }, 00:24:18.839 "method": "bdev_nvme_attach_controller" 00:24:18.839 } 00:24:18.839 EOF 00:24:18.839 )") 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@578 -- # cat 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@580 -- # jq . 
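Before launching fio, the fio_plugin wrapper checks whether the SPDK bdev plugin was built against a sanitizer runtime so that runtime can be preloaded ahead of it. A minimal sketch of that probe, using the same ldd | grep | awk pipeline the trace shows (the exact bookkeeping in autotest_common.sh differs slightly):
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
for sanitizer in libasan libclang_rt.asan; do
    # resolve the sanitizer runtime's path from the plugin's dynamic dependencies, if present
    asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
    [[ -n "$asan_lib" ]] && LD_PRELOAD="$asan_lib $LD_PRELOAD"
done
In this run neither library is linked in, so asan_lib stays empty and only the plugin itself ends up in LD_PRELOAD, as the trace below confirms.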
00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@581 -- # IFS=, 00:24:18.839 06:16:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:18.839 "params": { 00:24:18.839 "name": "Nvme0", 00:24:18.839 "trtype": "tcp", 00:24:18.839 "traddr": "10.0.0.3", 00:24:18.839 "adrfam": "ipv4", 00:24:18.839 "trsvcid": "4420", 00:24:18.839 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:18.839 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:18.839 "hdgst": false, 00:24:18.840 "ddgst": false 00:24:18.840 }, 00:24:18.840 "method": "bdev_nvme_attach_controller" 00:24:18.840 },{ 00:24:18.840 "params": { 00:24:18.840 "name": "Nvme1", 00:24:18.840 "trtype": "tcp", 00:24:18.840 "traddr": "10.0.0.3", 00:24:18.840 "adrfam": "ipv4", 00:24:18.840 "trsvcid": "4420", 00:24:18.840 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:18.840 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.840 "hdgst": false, 00:24:18.840 "ddgst": false 00:24:18.840 }, 00:24:18.840 "method": "bdev_nvme_attach_controller" 00:24:18.840 }' 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:18.840 06:16:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:18.840 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:18.840 ... 00:24:18.840 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:24:18.840 ... 
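Putting the pieces together: the generated bdev_nvme_attach_controller JSON above is streamed over one process substitution and the fio job file over another, and stock fio is started with the SPDK engine preloaded. Condensed from the trace (the /dev/fd numbers are simply whatever bash assigned; treat this as a sketch):
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
    # fd 62: JSON that attaches Nvme0/Nvme1 over TCP; fd 61: the fio job with bs=8k,16k,128k, iodepth=8, numjobs=2
Each attached controller exposes a bdev that one of the filename0/filename1 job sections echoed above reads from.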
00:24:18.840 fio-3.35 00:24:18.840 Starting 4 threads 00:24:24.116 00:24:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=97774: Tue Oct 1 06:16:48 2024 00:24:24.116 read: IOPS=2519, BW=19.7MiB/s (20.6MB/s)(98.5MiB/5002msec) 00:24:24.116 slat (nsec): min=4580, max=67222, avg=11392.80, stdev=5459.79 00:24:24.116 clat (usec): min=611, max=9713, avg=3145.78, stdev=1097.71 00:24:24.116 lat (usec): min=619, max=9720, avg=3157.17, stdev=1097.57 00:24:24.116 clat percentiles (usec): 00:24:24.116 | 1.00th=[ 1270], 5.00th=[ 1352], 10.00th=[ 1418], 20.00th=[ 1532], 00:24:24.116 | 30.00th=[ 2802], 40.00th=[ 3032], 50.00th=[ 3392], 60.00th=[ 3884], 00:24:24.116 | 70.00th=[ 4015], 80.00th=[ 4113], 90.00th=[ 4293], 95.00th=[ 4359], 00:24:24.116 | 99.00th=[ 4555], 99.50th=[ 4621], 99.90th=[ 4883], 99.95th=[ 7177], 00:24:24.116 | 99.99th=[ 7308] 00:24:24.116 bw ( KiB/s): min=19232, max=20768, per=31.19%, avg=20397.33, stdev=468.94, samples=9 00:24:24.116 iops : min= 2404, max= 2596, avg=2549.67, stdev=58.62, samples=9 00:24:24.116 lat (usec) : 750=0.10%, 1000=0.21% 00:24:24.116 lat (msec) : 2=22.79%, 4=44.55%, 10=32.35% 00:24:24.116 cpu : usr=90.08%, sys=8.48%, ctx=114, majf=0, minf=9 00:24:24.116 IO depths : 1=0.1%, 2=1.0%, 4=63.2%, 8=35.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 complete : 0=0.0%, 4=99.6%, 8=0.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 issued rwts: total=12604,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.116 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:24.116 filename0: (groupid=0, jobs=1): err= 0: pid=97775: Tue Oct 1 06:16:48 2024 00:24:24.116 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:24:24.116 slat (usec): min=3, max=304, avg=16.60, stdev= 7.20 00:24:24.116 clat (usec): min=1861, max=5783, avg=4247.35, stdev=220.38 00:24:24.116 lat (usec): min=1869, max=5794, avg=4263.95, stdev=220.54 00:24:24.116 clat percentiles (usec): 00:24:24.116 | 1.00th=[ 3752], 5.00th=[ 3949], 10.00th=[ 4015], 20.00th=[ 4080], 00:24:24.116 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:24:24.116 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:24.116 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5342], 00:24:24.116 | 99.99th=[ 5800] 00:24:24.116 bw ( KiB/s): min=14080, max=15104, per=22.68%, avg=14833.78, stdev=309.88, samples=9 00:24:24.116 iops : min= 1760, max= 1888, avg=1854.22, stdev=38.74, samples=9 00:24:24.116 lat (msec) : 2=0.08%, 4=8.72%, 10=91.21% 00:24:24.116 cpu : usr=91.62%, sys=7.06%, ctx=106, majf=0, minf=9 00:24:24.116 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 issued rwts: total=9279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.116 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:24.116 filename1: (groupid=0, jobs=1): err= 0: pid=97776: Tue Oct 1 06:16:48 2024 00:24:24.116 read: IOPS=1855, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5002msec) 00:24:24.116 slat (nsec): min=3303, max=69610, avg=16674.55, stdev=5994.08 00:24:24.116 clat (usec): min=2044, max=5422, avg=4246.43, stdev=219.54 00:24:24.116 lat (usec): min=2056, max=5443, avg=4263.10, stdev=219.81 00:24:24.116 clat percentiles (usec): 00:24:24.116 | 1.00th=[ 3752], 5.00th=[ 3916], 10.00th=[ 4015], 20.00th=[ 
4080], 00:24:24.116 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:24:24.116 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4490], 95.00th=[ 4621], 00:24:24.116 | 99.00th=[ 4752], 99.50th=[ 4948], 99.90th=[ 5211], 99.95th=[ 5342], 00:24:24.116 | 99.99th=[ 5407] 00:24:24.116 bw ( KiB/s): min=14208, max=14976, per=22.68%, avg=14837.00, stdev=243.64, samples=9 00:24:24.116 iops : min= 1776, max= 1872, avg=1854.56, stdev=30.44, samples=9 00:24:24.116 lat (msec) : 4=8.88%, 10=91.12% 00:24:24.116 cpu : usr=92.32%, sys=6.76%, ctx=6, majf=0, minf=0 00:24:24.116 IO depths : 1=0.1%, 2=25.0%, 4=50.0%, 8=25.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 issued rwts: total=9280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.116 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:24.116 filename1: (groupid=0, jobs=1): err= 0: pid=97777: Tue Oct 1 06:16:48 2024 00:24:24.116 read: IOPS=1945, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5002msec) 00:24:24.116 slat (nsec): min=3574, max=85522, avg=15915.08, stdev=6451.89 00:24:24.116 clat (usec): min=1133, max=7002, avg=4052.46, stdev=560.18 00:24:24.116 lat (usec): min=1141, max=7016, avg=4068.37, stdev=560.38 00:24:24.116 clat percentiles (usec): 00:24:24.116 | 1.00th=[ 2114], 5.00th=[ 2376], 10.00th=[ 3654], 20.00th=[ 4015], 00:24:24.116 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:24:24.116 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4424], 95.00th=[ 4490], 00:24:24.116 | 99.00th=[ 4686], 99.50th=[ 4752], 99.90th=[ 5080], 99.95th=[ 5080], 00:24:24.116 | 99.99th=[ 6980] 00:24:24.116 bw ( KiB/s): min=14848, max=18176, per=23.37%, avg=15288.89, stdev=1084.44, samples=9 00:24:24.116 iops : min= 1856, max= 2272, avg=1911.11, stdev=135.55, samples=9 00:24:24.116 lat (msec) : 2=0.35%, 4=17.68%, 10=81.97% 00:24:24.116 cpu : usr=91.42%, sys=7.58%, ctx=3, majf=0, minf=9 00:24:24.116 IO depths : 1=0.1%, 2=20.8%, 4=52.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:24.116 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.116 issued rwts: total=9732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.116 latency : target=0, window=0, percentile=100.00%, depth=8 00:24:24.116 00:24:24.116 Run status group 0 (all jobs): 00:24:24.116 READ: bw=63.9MiB/s (67.0MB/s), 14.5MiB/s-19.7MiB/s (15.2MB/s-20.6MB/s), io=319MiB (335MB), run=5002-5002msec 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.116 06:16:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.116 06:16:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 ************************************ 00:24:24.116 END TEST fio_dif_rand_params 00:24:24.116 ************************************ 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.116 00:24:24.116 real 0m23.076s 00:24:24.116 user 2m2.276s 00:24:24.116 sys 0m9.711s 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:24.116 06:16:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 06:16:49 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:24:24.116 06:16:49 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:24.116 06:16:49 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:24.116 06:16:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:24.116 ************************************ 00:24:24.116 START TEST fio_dif_digest 00:24:24.116 ************************************ 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- 
target/dif.sh@28 -- # local sub 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:24:24.116 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:24.117 bdev_null0 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:24.117 [2024-10-01 06:16:49.097153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # config=() 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # local subsystem config 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # for subsystem in "${@:-1}" 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # config+=("$(cat <<-EOF 00:24:24.117 { 00:24:24.117 "params": { 00:24:24.117 "name": "Nvme$subsystem", 00:24:24.117 "trtype": "$TEST_TRANSPORT", 00:24:24.117 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:24.117 "adrfam": "ipv4", 00:24:24.117 "trsvcid": "$NVMF_PORT", 00:24:24.117 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:24.117 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:24.117 "hdgst": ${hdgst:-false}, 00:24:24.117 "ddgst": ${ddgst:-false} 00:24:24.117 }, 00:24:24.117 "method": "bdev_nvme_attach_controller" 00:24:24.117 } 00:24:24.117 EOF 00:24:24.117 
)") 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@578 -- # cat 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@580 -- # jq . 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@581 -- # IFS=, 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # printf '%s\n' '{ 00:24:24.117 "params": { 00:24:24.117 "name": "Nvme0", 00:24:24.117 "trtype": "tcp", 00:24:24.117 "traddr": "10.0.0.3", 00:24:24.117 "adrfam": "ipv4", 00:24:24.117 "trsvcid": "4420", 00:24:24.117 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:24.117 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:24.117 "hdgst": true, 00:24:24.117 "ddgst": true 00:24:24.117 }, 00:24:24.117 "method": "bdev_nvme_attach_controller" 00:24:24.117 }' 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:24.117 06:16:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:24:24.117 06:16:49 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:24:24.117 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:24:24.117 ... 00:24:24.117 fio-3.35 00:24:24.117 Starting 3 threads 00:24:34.178 00:24:34.178 filename0: (groupid=0, jobs=1): err= 0: pid=97879: Tue Oct 1 06:16:59 2024 00:24:34.178 read: IOPS=226, BW=28.4MiB/s (29.7MB/s)(284MiB/10007msec) 00:24:34.178 slat (nsec): min=6877, max=59976, avg=15838.34, stdev=6463.51 00:24:34.178 clat (usec): min=10114, max=15448, avg=13184.69, stdev=404.35 00:24:34.178 lat (usec): min=10123, max=15461, avg=13200.52, stdev=404.96 00:24:34.178 clat percentiles (usec): 00:24:34.178 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:24:34.178 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:24:34.178 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:24:34.178 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15401], 99.95th=[15401], 00:24:34.178 | 99.99th=[15401] 00:24:34.178 bw ( KiB/s): min=28359, max=29184, per=33.36%, avg=29059.74, stdev=295.04, samples=19 00:24:34.178 iops : min= 221, max= 228, avg=227.00, stdev= 2.38, samples=19 00:24:34.178 lat (msec) : 20=100.00% 00:24:34.178 cpu : usr=91.65%, sys=7.77%, ctx=6, majf=0, minf=9 00:24:34.178 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.178 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.178 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.178 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.178 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.178 filename0: (groupid=0, jobs=1): err= 0: pid=97880: Tue Oct 1 06:16:59 2024 00:24:34.178 read: IOPS=226, BW=28.4MiB/s (29.8MB/s)(284MiB/10005msec) 00:24:34.178 slat (nsec): min=6922, max=62579, avg=17093.25, stdev=6584.14 00:24:34.178 clat (usec): min=4517, max=15454, avg=13177.59, stdev=493.37 00:24:34.178 lat (usec): min=4533, max=15469, avg=13194.68, stdev=493.98 00:24:34.178 clat percentiles (usec): 00:24:34.178 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:24:34.178 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:24:34.178 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:24:34.178 | 99.00th=[14222], 99.50th=[14353], 99.90th=[15401], 99.95th=[15401], 00:24:34.178 | 99.99th=[15401] 00:24:34.179 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29022.32, stdev=484.30, samples=19 00:24:34.179 iops : min= 222, max= 234, avg=226.74, stdev= 3.78, samples=19 00:24:34.179 lat (msec) : 10=0.13%, 20=99.87% 00:24:34.179 cpu : usr=91.34%, sys=7.99%, ctx=80, majf=0, minf=0 00:24:34.179 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.179 issued rwts: total=2271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.179 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.179 filename0: (groupid=0, jobs=1): err= 0: pid=97881: Tue Oct 1 06:16:59 2024 00:24:34.179 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(284MiB/10001msec) 00:24:34.179 slat (nsec): min=6849, max=76956, avg=17183.58, stdev=7016.45 00:24:34.179 clat (usec): 
min=12431, max=15454, avg=13189.17, stdev=381.46 00:24:34.179 lat (usec): min=12460, max=15469, avg=13206.36, stdev=382.17 00:24:34.179 clat percentiles (usec): 00:24:34.179 | 1.00th=[12518], 5.00th=[12649], 10.00th=[12780], 20.00th=[12911], 00:24:34.179 | 30.00th=[12911], 40.00th=[13042], 50.00th=[13173], 60.00th=[13173], 00:24:34.179 | 70.00th=[13304], 80.00th=[13435], 90.00th=[13698], 95.00th=[13960], 00:24:34.179 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15401], 99.95th=[15401], 00:24:34.179 | 99.99th=[15401] 00:24:34.179 bw ( KiB/s): min=28416, max=29952, per=33.32%, avg=29022.32, stdev=484.30, samples=19 00:24:34.179 iops : min= 222, max= 234, avg=226.74, stdev= 3.78, samples=19 00:24:34.179 lat (msec) : 20=100.00% 00:24:34.179 cpu : usr=91.95%, sys=7.39%, ctx=6, majf=0, minf=9 00:24:34.179 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:34.179 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.179 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:34.179 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:34.179 latency : target=0, window=0, percentile=100.00%, depth=3 00:24:34.179 00:24:34.179 Run status group 0 (all jobs): 00:24:34.179 READ: bw=85.1MiB/s (89.2MB/s), 28.3MiB/s-28.4MiB/s (29.7MB/s-29.8MB/s), io=851MiB (893MB), run=10001-10007msec 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.438 ************************************ 00:24:34.438 END TEST fio_dif_digest 00:24:34.438 ************************************ 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:34.438 00:24:34.438 real 0m10.869s 00:24:34.438 user 0m28.081s 00:24:34.438 sys 0m2.541s 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:34.438 06:16:59 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:24:34.438 06:16:59 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:24:34.438 06:16:59 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:24:34.438 06:16:59 nvmf_dif -- nvmf/common.sh@512 -- # nvmfcleanup 00:24:34.438 06:16:59 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:24:34.438 06:17:00 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:34.438 06:17:00 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:24:34.438 06:17:00 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 
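Teardown mirrors setup: destroy_subsystems deletes each subsystem before its backing null bdev, and nvmftestfini then unloads the kernel initiator modules (the retry loop whose trace continues below). The RPC half, as traced for subsystem 0 (same rpc.py assumption as earlier):
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
./scripts/rpc.py bdev_null_delete bdev_null0
The main functional difference in the fio_dif_digest run that just completed is on the attach side: its JSON sets "hdgst": true and "ddgst": true, enabling NVMe/TCP header and data digests, and the null bdev is created with --dif-type 3 instead of 1.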
00:24:34.438 06:17:00 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:34.438 rmmod nvme_tcp 00:24:34.697 rmmod nvme_fabrics 00:24:34.697 rmmod nvme_keyring 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@513 -- # '[' -n 97146 ']' 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@514 -- # killprocess 97146 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 97146 ']' 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 97146 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 97146 00:24:34.697 killing process with pid 97146 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 97146' 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@969 -- # kill 97146 00:24:34.697 06:17:00 nvmf_dif -- common/autotest_common.sh@974 -- # wait 97146 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:24:34.697 06:17:00 nvmf_dif -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:35.265 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:35.265 Waiting for block devices as requested 00:24:35.265 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:35.265 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@787 -- # iptables-restore 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@787 -- # iptables-save 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:35.524 06:17:00 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:35.525 06:17:00 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@243 -- # ip link 
delete nvmf_init_if2 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:35.525 06:17:01 nvmf_dif -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.525 06:17:01 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:35.525 06:17:01 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.784 06:17:01 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:24:35.784 ************************************ 00:24:35.784 END TEST nvmf_dif 00:24:35.784 ************************************ 00:24:35.784 00:24:35.784 real 0m58.713s 00:24:35.784 user 3m44.075s 00:24:35.784 sys 0m20.896s 00:24:35.784 06:17:01 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:35.784 06:17:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:24:35.784 06:17:01 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:35.784 06:17:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:35.784 06:17:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:35.784 06:17:01 -- common/autotest_common.sh@10 -- # set +x 00:24:35.784 ************************************ 00:24:35.784 START TEST nvmf_abort_qd_sizes 00:24:35.784 ************************************ 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:24:35.784 * Looking for test storage... 00:24:35.784 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lcov --version 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.784 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:24:36.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.044 --rc genhtml_branch_coverage=1 00:24:36.044 --rc genhtml_function_coverage=1 00:24:36.044 --rc genhtml_legend=1 00:24:36.044 --rc geninfo_all_blocks=1 00:24:36.044 --rc geninfo_unexecuted_blocks=1 00:24:36.044 00:24:36.044 ' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:24:36.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.044 --rc genhtml_branch_coverage=1 00:24:36.044 --rc genhtml_function_coverage=1 00:24:36.044 --rc genhtml_legend=1 00:24:36.044 --rc geninfo_all_blocks=1 00:24:36.044 --rc geninfo_unexecuted_blocks=1 00:24:36.044 00:24:36.044 ' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:24:36.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.044 --rc genhtml_branch_coverage=1 00:24:36.044 --rc genhtml_function_coverage=1 00:24:36.044 --rc genhtml_legend=1 00:24:36.044 --rc geninfo_all_blocks=1 00:24:36.044 --rc geninfo_unexecuted_blocks=1 00:24:36.044 00:24:36.044 ' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:24:36.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.044 --rc genhtml_branch_coverage=1 00:24:36.044 --rc genhtml_function_coverage=1 00:24:36.044 --rc genhtml_legend=1 00:24:36.044 --rc geninfo_all_blocks=1 00:24:36.044 --rc geninfo_unexecuted_blocks=1 00:24:36.044 00:24:36.044 ' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:36.044 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # '[' -z tcp ']' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@470 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@472 -- # prepare_net_devs 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@434 -- # local -g is_hw=no 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@436 -- # remove_spdk_ns 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # [[ virt != virt ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # [[ no == yes ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@447 -- # [[ virt == phy ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # [[ virt == phy-fallback ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == tcp ]] 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@456 -- # nvmf_veth_init 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # 
NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:36.044 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:36.045 Cannot find device "nvmf_init_br" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:36.045 Cannot find device "nvmf_init_br2" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:36.045 Cannot find device "nvmf_tgt_br" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:24:36.045 Cannot find device "nvmf_tgt_br2" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:36.045 Cannot find device "nvmf_init_br" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:36.045 Cannot find device "nvmf_init_br2" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:36.045 Cannot find device "nvmf_tgt_br" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:36.045 Cannot find device "nvmf_tgt_br2" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:36.045 Cannot find device "nvmf_br" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:36.045 Cannot find device "nvmf_init_if" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:36.045 Cannot find device "nvmf_init_if2" 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:36.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 
00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:36.045 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:36.045 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@786 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:36.304 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:36.304 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:24:36.304 00:24:36.304 --- 10.0.0.3 ping statistics --- 00:24:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.304 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:36.304 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:36.304 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.040 ms 00:24:36.304 00:24:36.304 --- 10.0.0.4 ping statistics --- 00:24:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.304 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:36.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:36.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:24:36.304 00:24:36.304 --- 10.0.0.1 ping statistics --- 00:24:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.304 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:36.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:36.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:24:36.304 00:24:36.304 --- 10.0.0.2 ping statistics --- 00:24:36.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:36.304 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@457 -- # return 0 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # '[' iso == iso ']' 00:24:36.304 06:17:01 nvmf_abort_qd_sizes -- nvmf/common.sh@475 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:37.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:37.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:24:37.243 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # [[ tcp == \r\d\m\a ]] 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # [[ tcp == \t\c\p ]] 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' tcp == tcp ']' 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@498 -- # modprobe nvme-tcp 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@503 -- # timing_enter start_nvmf_tgt 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:37.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@505 -- # nvmfpid=98530 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@506 -- # waitforlisten 98530 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 98530 ']' 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- nvmf/common.sh@504 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:37.243 06:17:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:37.243 [2024-10-01 06:17:02.802942] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
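[editor's note] For reference, the veth/bridge test bed that the nvmf/common.sh trace above builds can be reproduced by hand with plain iproute2 and iptables. The following is a condensed sketch of those same commands (hand-written, not the helper script itself), assuming the interface names and 10.0.0.0/24 addressing shown in the trace:

  # target-side namespace and the four veth pairs
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # initiator addresses stay in the root namespace, target addresses live in the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  # bring everything up and bridge the peer ends together
  for l in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" master nvmf_br; done
  # open NVMe/TCP (port 4420) on the initiator interfaces and allow bridged traffic
  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

With that topology in place, the four ping checks above (10.0.0.3/10.0.0.4 from the root namespace, 10.0.0.1/10.0.0.2 from inside nvmf_tgt_ns_spdk) confirm initiator/target reachability before nvmf_tgt is started inside the namespace.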
00:24:37.243 [2024-10-01 06:17:02.803255] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.502 [2024-10-01 06:17:02.945818] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:37.502 [2024-10-01 06:17:02.991238] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.502 [2024-10-01 06:17:02.991302] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.502 [2024-10-01 06:17:02.991317] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.502 [2024-10-01 06:17:02.991328] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.502 [2024-10-01 06:17:02.991337] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:37.502 [2024-10-01 06:17:02.991502] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.502 [2024-10-01 06:17:02.991667] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:24:37.502 [2024-10-01 06:17:02.992241] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:24:37.502 [2024-10-01 06:17:02.992303] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.502 [2024-10-01 06:17:03.028535] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:37.502 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:37.502 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:24:37.502 06:17:03 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_exit start_nvmf_tgt 00:24:37.502 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:37.502 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:24:37.763 06:17:03 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
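[editor's note] The nvme_in_userspace trace above enumerates controllers by PCI class code: class 01 (mass storage), subclass 08 (non-volatile memory controller), prog-if 02 (NVM Express), i.e. the string 0108 matched against lspci output. A one-line approximation of that same pipeline (a sketch condensed from the trace, not the scripts/common.sh function itself):

  # print the domain:bus:device.function of every NVMe controller (class 0108, prog-if 02)
  lspci -mm -n -D | grep -i -- -p02 | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'

In this run that pipeline yields 0000:00:10.0 and 0000:00:11.0, which pci_can_use then accepts because no PCI allow/block list is set.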
00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 ************************************ 00:24:37.763 START TEST spdk_target_abort 00:24:37.763 ************************************ 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 spdk_targetn1 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 [2024-10-01 06:17:03.273058] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:37.763 [2024-10-01 06:17:03.305247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:37.763 06:17:03 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:24:37.763 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:37.764 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:37.764 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:37.764 06:17:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:41.094 Initializing NVMe Controllers 00:24:41.094 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:41.094 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:41.094 Initialization complete. Launching workers. 
00:24:41.094 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 9904, failed: 0 00:24:41.094 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1046, failed to submit 8858 00:24:41.094 success 682, unsuccessful 364, failed 0 00:24:41.094 06:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:41.094 06:17:06 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:44.383 Initializing NVMe Controllers 00:24:44.383 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:44.383 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:44.383 Initialization complete. Launching workers. 00:24:44.383 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8762, failed: 0 00:24:44.383 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1155, failed to submit 7607 00:24:44.383 success 360, unsuccessful 795, failed 0 00:24:44.383 06:17:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:44.383 06:17:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:47.674 Initializing NVMe Controllers 00:24:47.674 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:24:47.674 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:47.674 Initialization complete. Launching workers. 
00:24:47.674 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30936, failed: 0 00:24:47.674 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2311, failed to submit 28625 00:24:47.674 success 458, unsuccessful 1853, failed 0 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.674 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 98530 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 98530 ']' 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 98530 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 98530 00:24:47.933 killing process with pid 98530 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 98530' 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 98530 00:24:47.933 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 98530 00:24:48.193 00:24:48.193 real 0m10.357s 00:24:48.193 user 0m39.409s 00:24:48.193 sys 0m2.172s 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:24:48.193 ************************************ 00:24:48.193 END TEST spdk_target_abort 00:24:48.193 ************************************ 00:24:48.193 06:17:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:24:48.193 06:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:48.193 06:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:48.193 06:17:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:24:48.193 ************************************ 00:24:48.193 START TEST kernel_target_abort 00:24:48.193 
************************************ 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@765 -- # local ip 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # ip_candidates=() 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@766 -- # local -A ip_candidates 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@768 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z tcp ]] 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@771 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip=NVMF_INITIATOR_IP 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@774 -- # [[ -z 10.0.0.1 ]] 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@779 -- # echo 10.0.0.1 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # nvmet=/sys/kernel/config/nvmet 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@661 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # local block nvme 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@666 -- # modprobe nvmet 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:48.193 06:17:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:48.452 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:48.452 Waiting for block devices as requested 00:24:48.452 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:48.712 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n1 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n1 00:24:48.712 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:48.713 No valid GPT data, bailing 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n1 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n2 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n2 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:48.713 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:48.972 No valid GPT data, bailing 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
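[editor's note] The loop above probes each /sys/block/nvme* device and only keeps one as the kernel target's backing device if it is not zoned and carries no partition table (hence the "No valid GPT data, bailing" lines). A rough standalone version of that availability check, using a hypothetical is_free_nvme helper and only the blkid probe (the trace additionally runs spdk-gpt.py), could look like:

  is_free_nvme() {   # hypothetical helper mirroring the checks traced above
      local dev=$1
      # reject zoned namespaces
      [[ -e /sys/block/$dev/queue/zoned && $(cat "/sys/block/$dev/queue/zoned") != none ]] && return 1
      # reject anything that already has a partition table
      [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && return 1
      return 0
  }
  for b in /sys/block/nvme*; do
      is_free_nvme "$(basename "$b")" && nvme=/dev/$(basename "$b")
  done

As in the trace, the last free device wins; here every candidate passes and the run ends up using /dev/nvme1n1.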
00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n2 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme0n3 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme0n3 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:48.972 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:48.972 No valid GPT data, bailing 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme0n3 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # for block in /sys/block/nvme* 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@676 -- # is_block_zoned nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # block_in_use nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:48.973 No valid GPT data, bailing 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # nvme=/dev/nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # [[ 
-b /dev/nvme1n1 ]] 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@682 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@683 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # echo 1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@692 -- # echo /dev/nvme1n1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo 1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 10.0.0.1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo tcp 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 4420 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # echo ipv4 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@704 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 --hostid=a979a798-a221-4879-b3c4-5aaa753fde06 -a 10.0.0.1 -t tcp -s 4420 00:24:48.973 00:24:48.973 Discovery Log Number of Records 2, Generation counter 2 00:24:48.973 =====Discovery Log Entry 0====== 00:24:48.973 trtype: tcp 00:24:48.973 adrfam: ipv4 00:24:48.973 subtype: current discovery subsystem 00:24:48.973 treq: not specified, sq flow control disable supported 00:24:48.973 portid: 1 00:24:48.973 trsvcid: 4420 00:24:48.973 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:48.973 traddr: 10.0.0.1 00:24:48.973 eflags: none 00:24:48.973 sectype: none 00:24:48.973 =====Discovery Log Entry 1====== 00:24:48.973 trtype: tcp 00:24:48.973 adrfam: ipv4 00:24:48.973 subtype: nvme subsystem 00:24:48.973 treq: not specified, sq flow control disable supported 00:24:48.973 portid: 1 00:24:48.973 trsvcid: 4420 00:24:48.973 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:48.973 traddr: 10.0.0.1 00:24:48.973 eflags: none 00:24:48.973 sectype: none 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:24:48.973 06:17:14 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:48.973 06:17:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.267 Initializing NVMe Controllers 00:24:52.267 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.267 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:52.267 Initialization complete. Launching workers. 00:24:52.267 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30344, failed: 0 00:24:52.267 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 30344, failed to submit 0 00:24:52.267 success 0, unsuccessful 30344, failed 0 00:24:52.267 06:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:52.267 06:17:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:55.556 Initializing NVMe Controllers 00:24:55.556 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:55.556 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:55.556 Initialization complete. Launching workers. 
00:24:55.556 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62247, failed: 0 00:24:55.556 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24879, failed to submit 37368 00:24:55.556 success 0, unsuccessful 24879, failed 0 00:24:55.556 06:17:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:24:55.556 06:17:20 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:58.877 Initializing NVMe Controllers 00:24:58.877 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:58.877 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:24:58.877 Initialization complete. Launching workers. 00:24:58.877 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67832, failed: 0 00:24:58.877 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16954, failed to submit 50878 00:24:58.877 success 0, unsuccessful 16954, failed 0 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@710 -- # echo 0 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@713 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@715 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # modules=(/sys/module/nvmet/holders/*) 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # modprobe -r nvmet_tcp nvmet 00:24:58.877 06:17:24 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@722 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:59.454 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.022 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:25:00.022 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:25:00.022 00:25:00.022 real 0m12.018s 00:25:00.022 user 0m5.690s 00:25:00.022 sys 0m3.621s 00:25:00.022 06:17:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:00.022 06:17:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:25:00.022 ************************************ 00:25:00.022 END TEST kernel_target_abort 00:25:00.022 ************************************ 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:25:00.281 
06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # nvmfcleanup 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:00.281 rmmod nvme_tcp 00:25:00.281 rmmod nvme_fabrics 00:25:00.281 rmmod nvme_keyring 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@513 -- # '[' -n 98530 ']' 00:25:00.281 Process with pid 98530 is not found 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@514 -- # killprocess 98530 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 98530 ']' 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 98530 00:25:00.281 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (98530) - No such process 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 98530 is not found' 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # '[' iso == iso ']' 00:25:00.281 06:17:25 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:25:00.539 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:00.539 Waiting for block devices as requested 00:25:00.799 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.799 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@519 -- # [[ tcp == \t\c\p ]] 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # nvmf_tcp_fini 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-save 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # grep -v SPDK_NVMF 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@787 -- # iptables-restore 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:00.799 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:01.058 06:17:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@652 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:25:01.058 00:25:01.058 real 0m25.416s 00:25:01.058 user 0m46.312s 00:25:01.058 sys 0m7.205s 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:01.058 06:17:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:25:01.058 ************************************ 00:25:01.058 END TEST nvmf_abort_qd_sizes 00:25:01.058 ************************************ 00:25:01.319 06:17:26 -- spdk/autotest.sh@288 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:01.319 06:17:26 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:01.319 06:17:26 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:01.319 06:17:26 -- common/autotest_common.sh@10 -- # set +x 00:25:01.319 ************************************ 00:25:01.319 START TEST keyring_file 00:25:01.319 ************************************ 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:25:01.319 * Looking for test storage... 
00:25:01.319 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1681 -- # lcov --version 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@345 -- # : 1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.319 --rc genhtml_branch_coverage=1 00:25:01.319 --rc genhtml_function_coverage=1 00:25:01.319 --rc genhtml_legend=1 00:25:01.319 --rc geninfo_all_blocks=1 00:25:01.319 --rc geninfo_unexecuted_blocks=1 00:25:01.319 00:25:01.319 ' 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.319 --rc genhtml_branch_coverage=1 00:25:01.319 --rc genhtml_function_coverage=1 00:25:01.319 --rc genhtml_legend=1 00:25:01.319 --rc geninfo_all_blocks=1 00:25:01.319 --rc 
geninfo_unexecuted_blocks=1 00:25:01.319 00:25:01.319 ' 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.319 --rc genhtml_branch_coverage=1 00:25:01.319 --rc genhtml_function_coverage=1 00:25:01.319 --rc genhtml_legend=1 00:25:01.319 --rc geninfo_all_blocks=1 00:25:01.319 --rc geninfo_unexecuted_blocks=1 00:25:01.319 00:25:01.319 ' 00:25:01.319 06:17:26 keyring_file -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:01.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.319 --rc genhtml_branch_coverage=1 00:25:01.319 --rc genhtml_function_coverage=1 00:25:01.319 --rc genhtml_legend=1 00:25:01.319 --rc geninfo_all_blocks=1 00:25:01.319 --rc geninfo_unexecuted_blocks=1 00:25:01.319 00:25:01.319 ' 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.319 06:17:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.319 06:17:26 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.319 06:17:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.319 06:17:26 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.319 06:17:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:25:01.319 06:17:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@51 -- # : 0 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.319 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:25:01.319 06:17:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:01.319 06:17:26 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.en3coaIml8 00:25:01.319 06:17:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:01.319 06:17:26 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:01.320 06:17:26 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.en3coaIml8 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.en3coaIml8 00:25:01.579 06:17:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.en3coaIml8 00:25:01.579 06:17:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.VVvjZDstt8 00:25:01.579 06:17:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:01.579 06:17:26 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:01.579 06:17:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.VVvjZDstt8 00:25:01.579 06:17:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.VVvjZDstt8 00:25:01.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
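[editor's note] The prep_key steps above build each key file by running format_interchange_psk, which wraps the raw hex key in the NVMe/TCP TLS PSK interchange format (prefix NVMeTLSkey-1, a two-digit hash identifier, a base64 payload) and then locks the file down to 0600. Below is a minimal stand-alone Python sketch of that formatting; the payload layout (raw key bytes plus a little-endian CRC32, base64-encoded) and the treatment of the hex string as raw bytes are assumptions about the interchange format, not a copy of the in-tree helper, and the generated file name is only an illustration.

#!/usr/bin/env python3
# Sketch of NVMe/TCP TLS PSK interchange formatting (assumed layout: base64(key || CRC32(key))).
import base64
import os
import tempfile
import zlib

def format_interchange_psk(hex_key, hmac_id=0):
    # Assumption: the hex string is interpreted as raw key bytes and a
    # little-endian CRC32 of those bytes is appended before base64 encoding.
    key = bytes.fromhex(hex_key)
    crc = zlib.crc32(key).to_bytes(4, "little")
    payload = base64.b64encode(key + crc).decode()
    return "NVMeTLSkey-1:{:02x}:{}:".format(hmac_id, payload)

def write_key_file(hex_key):
    fd, path = tempfile.mkstemp(prefix="tmp.")   # mirrors the mktemp in keyring/common.sh
    with os.fdopen(fd, "w") as f:
        f.write(format_interchange_psk(hex_key))
    os.chmod(path, 0o600)                        # keyring_file_add_key rejects looser permissions
    return path

if __name__ == "__main__":
    print(write_key_file("00112233445566778899aabbccddeeff"))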
00:25:01.579 06:17:27 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.VVvjZDstt8 00:25:01.579 06:17:27 keyring_file -- keyring/file.sh@30 -- # tgtpid=99431 00:25:01.579 06:17:27 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:01.579 06:17:27 keyring_file -- keyring/file.sh@32 -- # waitforlisten 99431 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99431 ']' 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:01.579 06:17:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.579 [2024-10-01 06:17:27.104857] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:25:01.579 [2024-10-01 06:17:27.104974] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99431 ] 00:25:01.838 [2024-10-01 06:17:27.241600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.838 [2024-10-01 06:17:27.280068] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.838 [2024-10-01 06:17:27.317929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:01.839 06:17:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:01.839 06:17:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:01.839 06:17:27 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:25:01.839 06:17:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.839 06:17:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:01.839 [2024-10-01 06:17:27.445173] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:02.098 null0 00:25:02.098 [2024-10-01 06:17:27.477141] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:02.098 [2024-10-01 06:17:27.477329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:02.098 06:17:27 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 
127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:02.098 [2024-10-01 06:17:27.509166] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:25:02.098 request: 00:25:02.098 { 00:25:02.098 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:25:02.098 "secure_channel": false, 00:25:02.098 "listen_address": { 00:25:02.098 "trtype": "tcp", 00:25:02.098 "traddr": "127.0.0.1", 00:25:02.098 "trsvcid": "4420" 00:25:02.098 }, 00:25:02.098 "method": "nvmf_subsystem_add_listener", 00:25:02.098 "req_id": 1 00:25:02.098 } 00:25:02.098 Got JSON-RPC error response 00:25:02.098 response: 00:25:02.098 { 00:25:02.098 "code": -32602, 00:25:02.098 "message": "Invalid parameters" 00:25:02.098 } 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:02.098 06:17:27 keyring_file -- keyring/file.sh@47 -- # bperfpid=99436 00:25:02.098 06:17:27 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:25:02.098 06:17:27 keyring_file -- keyring/file.sh@49 -- # waitforlisten 99436 /var/tmp/bperf.sock 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99436 ']' 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:02.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:02.098 06:17:27 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:02.098 [2024-10-01 06:17:27.576413] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:25:02.098 [2024-10-01 06:17:27.576520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99436 ] 00:25:02.357 [2024-10-01 06:17:27.716451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.357 [2024-10-01 06:17:27.759207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:02.357 [2024-10-01 06:17:27.792929] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:02.357 06:17:27 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:02.357 06:17:27 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:02.357 06:17:27 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:02.357 06:17:27 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:02.617 06:17:28 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VVvjZDstt8 00:25:02.617 06:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VVvjZDstt8 00:25:02.875 06:17:28 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:25:02.875 06:17:28 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:25:02.875 06:17:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:02.875 06:17:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:02.875 06:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.134 06:17:28 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.en3coaIml8 == \/\t\m\p\/\t\m\p\.\e\n\3\c\o\a\I\m\l\8 ]] 00:25:03.134 06:17:28 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:25:03.134 06:17:28 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:25:03.134 06:17:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.134 06:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.134 06:17:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.393 06:17:28 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.VVvjZDstt8 == \/\t\m\p\/\t\m\p\.\V\V\v\j\Z\D\s\t\t\8 ]] 00:25:03.393 06:17:28 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:25:03.393 06:17:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:03.393 06:17:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.393 06:17:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.393 06:17:28 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.393 06:17:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:03.651 06:17:29 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:25:03.651 06:17:29 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:25:03.651 06:17:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:03.651 06:17:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:03.651 06:17:29 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:03.651 06:17:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:03.651 06:17:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:03.910 06:17:29 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:25:03.910 06:17:29 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:03.910 06:17:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:04.168 [2024-10-01 06:17:29.695633] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:04.168 nvme0n1 00:25:04.427 06:17:29 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:25:04.427 06:17:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.427 06:17:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:04.427 06:17:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.427 06:17:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:04.427 06:17:29 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.685 06:17:30 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:25:04.685 06:17:30 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:25:04.685 06:17:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:04.685 06:17:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:04.686 06:17:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:04.686 06:17:30 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:04.686 06:17:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:04.945 06:17:30 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:25:04.945 06:17:30 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:04.945 Running I/O for 1 seconds... 
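[editor's note] bperf_cmd in keyring/common.sh is simply rpc.py pointed at the bdevperf UNIX socket, so the add-key / attach / refcount sequence exercised above can be reproduced with a few raw JSON-RPC calls. The sketch below talks to the socket directly using only the Python standard library; the method names and parameters are the ones visible in this log, while the socket path and key file path are placeholders for whatever the test actually created.

#!/usr/bin/env python3
# Minimal JSON-RPC client sketch for the bdevperf socket used by bperf_cmd (paths are placeholders).
import json
import socket

def rpc(sock_path, method, params=None, req_id=1):
    req = {"jsonrpc": "2.0", "method": method, "id": req_id}
    if params:
        req["params"] = params
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        s.sendall(json.dumps(req).encode())
        buf = b""
        while True:
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
            try:
                return json.loads(buf)   # stop once a complete JSON document has arrived
            except json.JSONDecodeError:
                continue

if __name__ == "__main__":
    sock = "/var/tmp/bperf.sock"
    rpc(sock, "keyring_file_add_key", {"name": "key0", "path": "/tmp/tmp.example0"})
    rpc(sock, "bdev_nvme_attach_controller", {
        "name": "nvme0", "trtype": "tcp", "traddr": "127.0.0.1", "trsvcid": "4420",
        "adrfam": "ipv4", "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0", "psk": "key0",
    })
    keys = rpc(sock, "keyring_get_keys")["result"]
    # get_refcnt in the log is just this lookup piped through jq
    print(next(k["refcnt"] for k in keys if k["name"] == "key0"))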
00:25:05.883 12049.00 IOPS, 47.07 MiB/s 00:25:05.883 Latency(us) 00:25:05.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.883 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:25:05.883 nvme0n1 : 1.01 12098.57 47.26 0.00 0.00 10552.58 4379.00 23354.65 00:25:05.883 =================================================================================================================== 00:25:05.883 Total : 12098.57 47.26 0.00 0.00 10552.58 4379.00 23354.65 00:25:05.883 { 00:25:05.883 "results": [ 00:25:05.883 { 00:25:05.883 "job": "nvme0n1", 00:25:05.883 "core_mask": "0x2", 00:25:05.883 "workload": "randrw", 00:25:05.883 "percentage": 50, 00:25:05.883 "status": "finished", 00:25:05.883 "queue_depth": 128, 00:25:05.883 "io_size": 4096, 00:25:05.883 "runtime": 1.006565, 00:25:05.883 "iops": 12098.572869114265, 00:25:05.883 "mibps": 47.2600502699776, 00:25:05.883 "io_failed": 0, 00:25:05.883 "io_timeout": 0, 00:25:05.883 "avg_latency_us": 10552.57753265949, 00:25:05.883 "min_latency_us": 4378.996363636364, 00:25:05.883 "max_latency_us": 23354.647272727274 00:25:05.883 } 00:25:05.883 ], 00:25:05.883 "core_count": 1 00:25:05.883 } 00:25:06.142 06:17:31 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:06.142 06:17:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:06.402 06:17:31 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:25:06.402 06:17:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.402 06:17:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:06.402 06:17:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.402 06:17:31 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.402 06:17:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:06.661 06:17:32 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:25:06.662 06:17:32 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:25:06.662 06:17:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:06.662 06:17:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:06.662 06:17:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:06.662 06:17:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:06.662 06:17:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:06.921 06:17:32 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:25:06.921 06:17:32 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:06.921 06:17:32 keyring_file -- 
common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:06.921 06:17:32 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:06.921 06:17:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:25:07.181 [2024-10-01 06:17:32.632006] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:07.181 [2024-10-01 06:17:32.632614] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0320 (107): Transport endpoint is not connected 00:25:07.181 [2024-10-01 06:17:32.633601] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfe0320 (9): Bad file descriptor 00:25:07.181 [2024-10-01 06:17:32.634598] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:07.181 [2024-10-01 06:17:32.634631] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:07.181 [2024-10-01 06:17:32.634656] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:07.181 [2024-10-01 06:17:32.634665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:07.181 request: 00:25:07.181 { 00:25:07.181 "name": "nvme0", 00:25:07.181 "trtype": "tcp", 00:25:07.181 "traddr": "127.0.0.1", 00:25:07.181 "adrfam": "ipv4", 00:25:07.181 "trsvcid": "4420", 00:25:07.181 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:07.181 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:07.181 "prchk_reftag": false, 00:25:07.181 "prchk_guard": false, 00:25:07.181 "hdgst": false, 00:25:07.181 "ddgst": false, 00:25:07.181 "psk": "key1", 00:25:07.181 "allow_unrecognized_csi": false, 00:25:07.181 "method": "bdev_nvme_attach_controller", 00:25:07.181 "req_id": 1 00:25:07.181 } 00:25:07.181 Got JSON-RPC error response 00:25:07.181 response: 00:25:07.181 { 00:25:07.181 "code": -5, 00:25:07.181 "message": "Input/output error" 00:25:07.181 } 00:25:07.181 06:17:32 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:07.181 06:17:32 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:07.181 06:17:32 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:07.181 06:17:32 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:07.181 06:17:32 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:25:07.181 06:17:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.181 06:17:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:07.181 06:17:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.181 06:17:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.181 06:17:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:07.440 06:17:32 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:25:07.440 06:17:32 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:25:07.440 06:17:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:07.440 06:17:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:07.440 06:17:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:07.440 06:17:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:07.440 06:17:32 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:07.699 06:17:33 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:25:07.699 06:17:33 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:25:07.699 06:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:07.958 06:17:33 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:25:07.958 06:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:25:08.218 06:17:33 keyring_file -- keyring/file.sh@78 -- # jq length 00:25:08.218 06:17:33 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:25:08.218 06:17:33 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.477 06:17:34 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:25:08.477 06:17:34 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.en3coaIml8 00:25:08.477 06:17:34 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.477 06:17:34 keyring_file -- 
common/autotest_common.sh@650 -- # local es=0 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:08.477 06:17:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.477 06:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.736 [2024-10-01 06:17:34.313271] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.en3coaIml8': 0100660 00:25:08.736 [2024-10-01 06:17:34.313323] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:08.736 request: 00:25:08.736 { 00:25:08.736 "name": "key0", 00:25:08.736 "path": "/tmp/tmp.en3coaIml8", 00:25:08.736 "method": "keyring_file_add_key", 00:25:08.736 "req_id": 1 00:25:08.736 } 00:25:08.736 Got JSON-RPC error response 00:25:08.736 response: 00:25:08.736 { 00:25:08.736 "code": -1, 00:25:08.736 "message": "Operation not permitted" 00:25:08.736 } 00:25:08.736 06:17:34 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:08.736 06:17:34 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:08.736 06:17:34 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:08.736 06:17:34 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:08.736 06:17:34 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.en3coaIml8 00:25:08.736 06:17:34 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.736 06:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.en3coaIml8 00:25:08.995 06:17:34 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.en3coaIml8 00:25:08.995 06:17:34 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:25:08.995 06:17:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:08.995 06:17:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:08.995 06:17:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:08.995 06:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:08.995 06:17:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:09.563 06:17:34 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:25:09.563 06:17:34 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.563 06:17:34 
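[editor's note] The two expected failures just above and just below come from keyring_file_check_path: the key is refused while its file is group-accessible (mode 0100660, "Operation not permitted"), and again once the file has been removed out from under the registered key ("No such file or directory"). A small pre-flight check like the following, run before keyring_file_add_key, catches both conditions early; the 0600 requirement is taken from the error text in the log, and treating any group/other permission bit as fatal is an assumption about the exact policy.

#!/usr/bin/env python3
# Pre-flight check sketch mirroring keyring_file_check_path: the key file must exist and be owner-only.
import os
import stat
import sys

def check_key_file(path):
    try:
        st = os.stat(path)
    except FileNotFoundError:
        sys.exit("key file %r does not exist (attach would fail with 'No such device')" % path)
    mode = stat.S_IMODE(st.st_mode)
    # Assumption: any group/other permission bit is fatal, matching the
    # "Invalid permissions for key file ... 0100660" error seen in the log.
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        sys.exit("key file %r has mode %04o; expected 0600" % (path, mode))

if __name__ == "__main__":
    check_key_file(sys.argv[1] if len(sys.argv) > 1 else "/tmp/tmp.example0")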
keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:09.563 06:17:34 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.563 06:17:34 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:09.563 [2024-10-01 06:17:35.137528] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.en3coaIml8': No such file or directory 00:25:09.563 [2024-10-01 06:17:35.137577] nvme_tcp.c:2609:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:25:09.563 [2024-10-01 06:17:35.137611] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:25:09.563 [2024-10-01 06:17:35.137619] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:25:09.563 [2024-10-01 06:17:35.137628] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:09.563 [2024-10-01 06:17:35.137635] bdev_nvme.c:6447:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:25:09.563 request: 00:25:09.563 { 00:25:09.563 "name": "nvme0", 00:25:09.563 "trtype": "tcp", 00:25:09.563 "traddr": "127.0.0.1", 00:25:09.563 "adrfam": "ipv4", 00:25:09.563 "trsvcid": "4420", 00:25:09.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:09.563 "prchk_reftag": false, 00:25:09.563 "prchk_guard": false, 00:25:09.563 "hdgst": false, 00:25:09.563 "ddgst": false, 00:25:09.563 "psk": "key0", 00:25:09.563 "allow_unrecognized_csi": false, 00:25:09.563 "method": "bdev_nvme_attach_controller", 00:25:09.563 "req_id": 1 00:25:09.563 } 00:25:09.563 Got JSON-RPC error response 00:25:09.563 response: 00:25:09.563 { 00:25:09.563 "code": -19, 00:25:09.563 "message": "No such device" 00:25:09.563 } 00:25:09.563 06:17:35 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:25:09.563 06:17:35 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:09.563 06:17:35 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:09.563 06:17:35 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:09.563 06:17:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:25:09.563 06:17:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:09.822 06:17:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:09.822 
06:17:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.KZCS9bCRZ7 00:25:09.822 06:17:35 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@726 -- # local prefix key digest 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@728 -- # digest=0 00:25:09.822 06:17:35 keyring_file -- nvmf/common.sh@729 -- # python - 00:25:10.081 06:17:35 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.KZCS9bCRZ7 00:25:10.081 06:17:35 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.KZCS9bCRZ7 00:25:10.081 06:17:35 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.KZCS9bCRZ7 00:25:10.081 06:17:35 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KZCS9bCRZ7 00:25:10.081 06:17:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KZCS9bCRZ7 00:25:10.341 06:17:35 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.341 06:17:35 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:10.600 nvme0n1 00:25:10.600 06:17:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:25:10.600 06:17:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:10.600 06:17:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:10.600 06:17:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:10.600 06:17:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:10.600 06:17:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:10.858 06:17:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:25:10.858 06:17:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:25:10.858 06:17:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:25:11.118 06:17:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:25:11.118 06:17:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:25:11.118 06:17:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.118 06:17:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.118 06:17:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.377 06:17:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:25:11.377 06:17:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:25:11.377 06:17:36 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:25:11.377 06:17:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:11.377 06:17:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:11.377 06:17:36 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.377 06:17:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:11.635 06:17:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:25:11.635 06:17:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:11.635 06:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:11.893 06:17:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:25:11.893 06:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:11.893 06:17:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:25:12.152 06:17:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:25:12.152 06:17:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.KZCS9bCRZ7 00:25:12.152 06:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.KZCS9bCRZ7 00:25:12.410 06:17:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.VVvjZDstt8 00:25:12.410 06:17:37 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.VVvjZDstt8 00:25:12.669 06:17:38 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.669 06:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:25:12.930 nvme0n1 00:25:12.930 06:17:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:25:12.930 06:17:38 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:25:13.499 06:17:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:25:13.499 "subsystems": [ 00:25:13.499 { 00:25:13.499 "subsystem": "keyring", 00:25:13.499 "config": [ 00:25:13.499 { 00:25:13.499 "method": "keyring_file_add_key", 00:25:13.499 "params": { 00:25:13.499 "name": "key0", 00:25:13.499 "path": "/tmp/tmp.KZCS9bCRZ7" 00:25:13.499 } 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "method": "keyring_file_add_key", 00:25:13.499 "params": { 00:25:13.499 "name": "key1", 00:25:13.499 "path": "/tmp/tmp.VVvjZDstt8" 00:25:13.499 } 00:25:13.499 } 00:25:13.499 ] 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "subsystem": "iobuf", 00:25:13.499 "config": [ 00:25:13.499 { 00:25:13.499 "method": "iobuf_set_options", 00:25:13.499 "params": { 00:25:13.499 "small_pool_count": 8192, 00:25:13.499 "large_pool_count": 1024, 00:25:13.499 "small_bufsize": 8192, 00:25:13.499 "large_bufsize": 135168 00:25:13.499 } 00:25:13.499 } 00:25:13.499 ] 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "subsystem": "sock", 00:25:13.499 "config": [ 
00:25:13.499 { 00:25:13.499 "method": "sock_set_default_impl", 00:25:13.499 "params": { 00:25:13.499 "impl_name": "uring" 00:25:13.499 } 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "method": "sock_impl_set_options", 00:25:13.499 "params": { 00:25:13.499 "impl_name": "ssl", 00:25:13.499 "recv_buf_size": 4096, 00:25:13.499 "send_buf_size": 4096, 00:25:13.499 "enable_recv_pipe": true, 00:25:13.499 "enable_quickack": false, 00:25:13.499 "enable_placement_id": 0, 00:25:13.499 "enable_zerocopy_send_server": true, 00:25:13.499 "enable_zerocopy_send_client": false, 00:25:13.499 "zerocopy_threshold": 0, 00:25:13.499 "tls_version": 0, 00:25:13.499 "enable_ktls": false 00:25:13.499 } 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "method": "sock_impl_set_options", 00:25:13.499 "params": { 00:25:13.499 "impl_name": "posix", 00:25:13.499 "recv_buf_size": 2097152, 00:25:13.499 "send_buf_size": 2097152, 00:25:13.499 "enable_recv_pipe": true, 00:25:13.499 "enable_quickack": false, 00:25:13.499 "enable_placement_id": 0, 00:25:13.499 "enable_zerocopy_send_server": true, 00:25:13.499 "enable_zerocopy_send_client": false, 00:25:13.499 "zerocopy_threshold": 0, 00:25:13.499 "tls_version": 0, 00:25:13.499 "enable_ktls": false 00:25:13.499 } 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "method": "sock_impl_set_options", 00:25:13.499 "params": { 00:25:13.499 "impl_name": "uring", 00:25:13.499 "recv_buf_size": 2097152, 00:25:13.499 "send_buf_size": 2097152, 00:25:13.499 "enable_recv_pipe": true, 00:25:13.499 "enable_quickack": false, 00:25:13.499 "enable_placement_id": 0, 00:25:13.499 "enable_zerocopy_send_server": false, 00:25:13.499 "enable_zerocopy_send_client": false, 00:25:13.499 "zerocopy_threshold": 0, 00:25:13.499 "tls_version": 0, 00:25:13.499 "enable_ktls": false 00:25:13.499 } 00:25:13.499 } 00:25:13.499 ] 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "subsystem": "vmd", 00:25:13.499 "config": [] 00:25:13.499 }, 00:25:13.499 { 00:25:13.499 "subsystem": "accel", 00:25:13.499 "config": [ 00:25:13.499 { 00:25:13.499 "method": "accel_set_options", 00:25:13.499 "params": { 00:25:13.499 "small_cache_size": 128, 00:25:13.499 "large_cache_size": 16, 00:25:13.499 "task_count": 2048, 00:25:13.500 "sequence_count": 2048, 00:25:13.500 "buf_count": 2048 00:25:13.500 } 00:25:13.500 } 00:25:13.500 ] 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "subsystem": "bdev", 00:25:13.500 "config": [ 00:25:13.500 { 00:25:13.500 "method": "bdev_set_options", 00:25:13.500 "params": { 00:25:13.500 "bdev_io_pool_size": 65535, 00:25:13.500 "bdev_io_cache_size": 256, 00:25:13.500 "bdev_auto_examine": true, 00:25:13.500 "iobuf_small_cache_size": 128, 00:25:13.500 "iobuf_large_cache_size": 16 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_raid_set_options", 00:25:13.500 "params": { 00:25:13.500 "process_window_size_kb": 1024, 00:25:13.500 "process_max_bandwidth_mb_sec": 0 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_iscsi_set_options", 00:25:13.500 "params": { 00:25:13.500 "timeout_sec": 30 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_nvme_set_options", 00:25:13.500 "params": { 00:25:13.500 "action_on_timeout": "none", 00:25:13.500 "timeout_us": 0, 00:25:13.500 "timeout_admin_us": 0, 00:25:13.500 "keep_alive_timeout_ms": 10000, 00:25:13.500 "arbitration_burst": 0, 00:25:13.500 "low_priority_weight": 0, 00:25:13.500 "medium_priority_weight": 0, 00:25:13.500 "high_priority_weight": 0, 00:25:13.500 "nvme_adminq_poll_period_us": 10000, 00:25:13.500 
"nvme_ioq_poll_period_us": 0, 00:25:13.500 "io_queue_requests": 512, 00:25:13.500 "delay_cmd_submit": true, 00:25:13.500 "transport_retry_count": 4, 00:25:13.500 "bdev_retry_count": 3, 00:25:13.500 "transport_ack_timeout": 0, 00:25:13.500 "ctrlr_loss_timeout_sec": 0, 00:25:13.500 "reconnect_delay_sec": 0, 00:25:13.500 "fast_io_fail_timeout_sec": 0, 00:25:13.500 "disable_auto_failback": false, 00:25:13.500 "generate_uuids": false, 00:25:13.500 "transport_tos": 0, 00:25:13.500 "nvme_error_stat": false, 00:25:13.500 "rdma_srq_size": 0, 00:25:13.500 "io_path_stat": false, 00:25:13.500 "allow_accel_sequence": false, 00:25:13.500 "rdma_max_cq_size": 0, 00:25:13.500 "rdma_cm_event_timeout_ms": 0, 00:25:13.500 "dhchap_digests": [ 00:25:13.500 "sha256", 00:25:13.500 "sha384", 00:25:13.500 "sha512" 00:25:13.500 ], 00:25:13.500 "dhchap_dhgroups": [ 00:25:13.500 "null", 00:25:13.500 "ffdhe2048", 00:25:13.500 "ffdhe3072", 00:25:13.500 "ffdhe4096", 00:25:13.500 "ffdhe6144", 00:25:13.500 "ffdhe8192" 00:25:13.500 ] 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_nvme_attach_controller", 00:25:13.500 "params": { 00:25:13.500 "name": "nvme0", 00:25:13.500 "trtype": "TCP", 00:25:13.500 "adrfam": "IPv4", 00:25:13.500 "traddr": "127.0.0.1", 00:25:13.500 "trsvcid": "4420", 00:25:13.500 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.500 "prchk_reftag": false, 00:25:13.500 "prchk_guard": false, 00:25:13.500 "ctrlr_loss_timeout_sec": 0, 00:25:13.500 "reconnect_delay_sec": 0, 00:25:13.500 "fast_io_fail_timeout_sec": 0, 00:25:13.500 "psk": "key0", 00:25:13.500 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:13.500 "hdgst": false, 00:25:13.500 "ddgst": false 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_nvme_set_hotplug", 00:25:13.500 "params": { 00:25:13.500 "period_us": 100000, 00:25:13.500 "enable": false 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "bdev_wait_for_examine" 00:25:13.500 } 00:25:13.500 ] 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "subsystem": "nbd", 00:25:13.500 "config": [] 00:25:13.500 } 00:25:13.500 ] 00:25:13.500 }' 00:25:13.500 06:17:38 keyring_file -- keyring/file.sh@115 -- # killprocess 99436 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99436 ']' 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99436 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99436 00:25:13.500 killing process with pid 99436 00:25:13.500 Received shutdown signal, test time was about 1.000000 seconds 00:25:13.500 00:25:13.500 Latency(us) 00:25:13.500 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:13.500 =================================================================================================================== 00:25:13.500 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99436' 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@969 -- # kill 99436 00:25:13.500 06:17:38 keyring_file -- common/autotest_common.sh@974 -- # 
wait 99436 00:25:13.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:13.500 06:17:39 keyring_file -- keyring/file.sh@118 -- # bperfpid=99685 00:25:13.500 06:17:39 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:25:13.500 06:17:39 keyring_file -- keyring/file.sh@120 -- # waitforlisten 99685 /var/tmp/bperf.sock 00:25:13.500 06:17:39 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 99685 ']' 00:25:13.500 06:17:39 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:25:13.500 "subsystems": [ 00:25:13.500 { 00:25:13.500 "subsystem": "keyring", 00:25:13.500 "config": [ 00:25:13.500 { 00:25:13.500 "method": "keyring_file_add_key", 00:25:13.500 "params": { 00:25:13.500 "name": "key0", 00:25:13.500 "path": "/tmp/tmp.KZCS9bCRZ7" 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "keyring_file_add_key", 00:25:13.500 "params": { 00:25:13.500 "name": "key1", 00:25:13.500 "path": "/tmp/tmp.VVvjZDstt8" 00:25:13.500 } 00:25:13.500 } 00:25:13.500 ] 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "subsystem": "iobuf", 00:25:13.500 "config": [ 00:25:13.500 { 00:25:13.500 "method": "iobuf_set_options", 00:25:13.500 "params": { 00:25:13.500 "small_pool_count": 8192, 00:25:13.500 "large_pool_count": 1024, 00:25:13.500 "small_bufsize": 8192, 00:25:13.500 "large_bufsize": 135168 00:25:13.500 } 00:25:13.500 } 00:25:13.500 ] 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "subsystem": "sock", 00:25:13.500 "config": [ 00:25:13.500 { 00:25:13.500 "method": "sock_set_default_impl", 00:25:13.500 "params": { 00:25:13.500 "impl_name": "uring" 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "sock_impl_set_options", 00:25:13.500 "params": { 00:25:13.500 "impl_name": "ssl", 00:25:13.500 "recv_buf_size": 4096, 00:25:13.500 "send_buf_size": 4096, 00:25:13.500 "enable_recv_pipe": true, 00:25:13.500 "enable_quickack": false, 00:25:13.500 "enable_placement_id": 0, 00:25:13.500 "enable_zerocopy_send_server": true, 00:25:13.500 "enable_zerocopy_send_client": false, 00:25:13.500 "zerocopy_threshold": 0, 00:25:13.500 "tls_version": 0, 00:25:13.500 "enable_ktls": false 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "sock_impl_set_options", 00:25:13.500 "params": { 00:25:13.500 "impl_name": "posix", 00:25:13.500 "recv_buf_size": 2097152, 00:25:13.500 "send_buf_size": 2097152, 00:25:13.500 "enable_recv_pipe": true, 00:25:13.500 "enable_quickack": false, 00:25:13.500 "enable_placement_id": 0, 00:25:13.500 "enable_zerocopy_send_server": true, 00:25:13.500 "enable_zerocopy_send_client": false, 00:25:13.500 "zerocopy_threshold": 0, 00:25:13.500 "tls_version": 0, 00:25:13.500 "enable_ktls": false 00:25:13.500 } 00:25:13.500 }, 00:25:13.500 { 00:25:13.500 "method": "sock_impl_set_options", 00:25:13.500 "params": { 00:25:13.500 "impl_name": "uring", 00:25:13.500 "recv_buf_size": 2097152, 00:25:13.500 "send_buf_size": 2097152, 00:25:13.500 "enable_recv_pipe": true, 00:25:13.500 "enable_quickack": false, 00:25:13.500 "enable_placement_id": 0, 00:25:13.500 "enable_zerocopy_send_server": false, 00:25:13.500 "enable_zerocopy_send_client": false, 00:25:13.500 "zerocopy_threshold": 0, 00:25:13.500 "tls_version": 0, 00:25:13.501 "enable_ktls": false 00:25:13.501 } 00:25:13.501 } 00:25:13.501 ] 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "subsystem": "vmd", 00:25:13.501 "config": [] 00:25:13.501 }, 00:25:13.501 { 
00:25:13.501 "subsystem": "accel", 00:25:13.501 "config": [ 00:25:13.501 { 00:25:13.501 "method": "accel_set_options", 00:25:13.501 "params": { 00:25:13.501 "small_cache_size": 128, 00:25:13.501 "large_cache_size": 16, 00:25:13.501 "task_count": 2048, 00:25:13.501 "sequence_count": 2048, 00:25:13.501 "buf_count": 2048 00:25:13.501 } 00:25:13.501 } 00:25:13.501 ] 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "subsystem": "bdev", 00:25:13.501 "config": [ 00:25:13.501 { 00:25:13.501 "method": "bdev_set_options", 00:25:13.501 "params": { 00:25:13.501 "bdev_io_pool_size": 65535, 00:25:13.501 "bdev_io_cache_size": 256, 00:25:13.501 "bdev_auto_examine": true, 00:25:13.501 "iobuf_small_cache_size": 128, 00:25:13.501 "iobuf_large_cache_size": 16 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": "bdev_raid_set_options", 00:25:13.501 "params": { 00:25:13.501 "process_window_size_kb": 1024, 00:25:13.501 "process_max_bandwidth_mb_sec": 0 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": "bdev_iscsi_set_options", 00:25:13.501 "params": { 00:25:13.501 "timeout_sec": 30 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": "bdev_nvme_set_options", 00:25:13.501 "params": { 00:25:13.501 "action_on_timeout": "none", 00:25:13.501 "timeout_us": 0, 00:25:13.501 "timeout_admin_us": 0, 00:25:13.501 "keep_alive_timeout_ms": 10000, 00:25:13.501 "arbitration_burst": 0, 00:25:13.501 "low_priority_weight": 0, 00:25:13.501 "medium_priority_weight": 0, 00:25:13.501 "high_priority_weight": 0, 00:25:13.501 "nvme_adminq_poll_period_us": 10000, 00:25:13.501 "nvme_ioq_poll_period_us": 0, 00:25:13.501 "io_queue_requests": 512, 00:25:13.501 "delay_cmd_submit": true, 00:25:13.501 "transport_retry_count": 4, 00:25:13.501 "bdev_retry_count": 3, 00:25:13.501 "transport_ack_timeout": 0, 00:25:13.501 "ctrlr_loss_timeout_sec": 0, 00:25:13.501 "reconnect_delay_sec": 0, 00:25:13.501 "fast_io_fail_timeout_sec": 0, 00:25:13.501 "disable_auto_failback": false, 00:25:13.501 "generate_uuids": false, 00:25:13.501 "transport_tos": 0, 00:25:13.501 "nvme_error_stat": false, 00:25:13.501 "rdma_srq_size": 0, 00:25:13.501 "io_path_stat": false, 00:25:13.501 "allow_accel_sequence": false, 00:25:13.501 "rdma_max_cq_size": 0, 00:25:13.501 "rdma_cm_event_timeout_ms": 0, 00:25:13.501 "dhchap_digests": [ 00:25:13.501 "sha256", 00:25:13.501 "sha384", 00:25:13.501 "sha512" 00:25:13.501 ], 00:25:13.501 "dhchap_dhgroups": [ 00:25:13.501 "null", 00:25:13.501 "ffdhe2048", 00:25:13.501 "ffdhe3072", 00:25:13.501 "ffdhe4096", 00:25:13.501 "ffdhe6144", 00:25:13.501 "ffdhe8192" 00:25:13.501 ] 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": "bdev_nvme_attach_controller", 00:25:13.501 "params": { 00:25:13.501 "name": "nvme0", 00:25:13.501 "trtype": "TCP", 00:25:13.501 "adrfam": "IPv4", 00:25:13.501 "traddr": "127.0.0.1", 00:25:13.501 "trsvcid": "4420", 00:25:13.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:13.501 "prchk_reftag": false, 00:25:13.501 "prchk_guard": false, 00:25:13.501 "ctrlr_loss_timeout_sec": 0, 00:25:13.501 "reconnect_delay_sec": 0, 00:25:13.501 "fast_io_fail_timeout_sec": 0, 00:25:13.501 "psk": "key0", 00:25:13.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:13.501 "hdgst": false, 00:25:13.501 "ddgst": false 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": "bdev_nvme_set_hotplug", 00:25:13.501 "params": { 00:25:13.501 "period_us": 100000, 00:25:13.501 "enable": false 00:25:13.501 } 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "method": 
"bdev_wait_for_examine" 00:25:13.501 } 00:25:13.501 ] 00:25:13.501 }, 00:25:13.501 { 00:25:13.501 "subsystem": "nbd", 00:25:13.501 "config": [] 00:25:13.501 } 00:25:13.501 ] 00:25:13.501 }' 00:25:13.501 06:17:39 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:13.501 06:17:39 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:13.501 06:17:39 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:13.501 06:17:39 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:13.501 06:17:39 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:13.501 [2024-10-01 06:17:39.048965] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 00:25:13.501 [2024-10-01 06:17:39.049079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99685 ] 00:25:13.760 [2024-10-01 06:17:39.181512] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:13.760 [2024-10-01 06:17:39.216257] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:13.760 [2024-10-01 06:17:39.326677] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:13.760 [2024-10-01 06:17:39.363161] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:14.695 06:17:40 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:14.695 06:17:40 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:25:14.695 06:17:40 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:25:14.695 06:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.695 06:17:40 keyring_file -- keyring/file.sh@121 -- # jq length 00:25:14.954 06:17:40 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:25:14.954 06:17:40 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:25:14.954 06:17:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:25:14.954 06:17:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:14.954 06:17:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:14.954 06:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:14.954 06:17:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:25:15.212 06:17:40 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:25:15.212 06:17:40 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:25:15.212 06:17:40 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:25:15.212 06:17:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:25:15.212 06:17:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:15.212 06:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:15.212 06:17:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:25:15.471 06:17:40 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:25:15.471 06:17:40 keyring_file -- keyring/file.sh@124 -- # 
jq -r '.[].name' 00:25:15.471 06:17:40 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:25:15.472 06:17:40 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:25:15.731 06:17:41 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:25:15.731 06:17:41 keyring_file -- keyring/file.sh@1 -- # cleanup 00:25:15.731 06:17:41 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.KZCS9bCRZ7 /tmp/tmp.VVvjZDstt8 00:25:15.731 06:17:41 keyring_file -- keyring/file.sh@20 -- # killprocess 99685 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99685 ']' 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99685 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99685 00:25:15.731 killing process with pid 99685 00:25:15.731 Received shutdown signal, test time was about 1.000000 seconds 00:25:15.731 00:25:15.731 Latency(us) 00:25:15.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:15.731 =================================================================================================================== 00:25:15.731 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99685' 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@969 -- # kill 99685 00:25:15.731 06:17:41 keyring_file -- common/autotest_common.sh@974 -- # wait 99685 00:25:15.990 06:17:41 keyring_file -- keyring/file.sh@21 -- # killprocess 99431 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 99431 ']' 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@954 -- # kill -0 99431 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@955 -- # uname 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99431 00:25:15.990 killing process with pid 99431 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99431' 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@969 -- # kill 99431 00:25:15.990 06:17:41 keyring_file -- common/autotest_common.sh@974 -- # wait 99431 00:25:16.250 00:25:16.250 real 0m14.968s 00:25:16.250 user 0m38.915s 00:25:16.250 sys 0m2.694s 00:25:16.250 06:17:41 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:16.250 ************************************ 00:25:16.250 06:17:41 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:25:16.250 END TEST keyring_file 00:25:16.250 ************************************ 00:25:16.250 06:17:41 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:25:16.250 06:17:41 -- spdk/autotest.sh@290 -- # 
run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:16.250 06:17:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:16.250 06:17:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:16.250 06:17:41 -- common/autotest_common.sh@10 -- # set +x 00:25:16.250 ************************************ 00:25:16.250 START TEST keyring_linux 00:25:16.250 ************************************ 00:25:16.250 06:17:41 keyring_linux -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:25:16.250 Joined session keyring: 683176767 00:25:16.250 * Looking for test storage... 00:25:16.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:25:16.250 06:17:41 keyring_linux -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:25:16.250 06:17:41 keyring_linux -- common/autotest_common.sh@1681 -- # lcov --version 00:25:16.250 06:17:41 keyring_linux -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@345 -- # : 1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@368 -- # return 0 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:25:16.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.510 --rc genhtml_branch_coverage=1 00:25:16.510 --rc genhtml_function_coverage=1 00:25:16.510 --rc genhtml_legend=1 00:25:16.510 --rc geninfo_all_blocks=1 00:25:16.510 --rc geninfo_unexecuted_blocks=1 00:25:16.510 00:25:16.510 ' 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:25:16.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.510 --rc genhtml_branch_coverage=1 00:25:16.510 --rc genhtml_function_coverage=1 00:25:16.510 --rc genhtml_legend=1 00:25:16.510 --rc geninfo_all_blocks=1 00:25:16.510 --rc geninfo_unexecuted_blocks=1 00:25:16.510 00:25:16.510 ' 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:25:16.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.510 --rc genhtml_branch_coverage=1 00:25:16.510 --rc genhtml_function_coverage=1 00:25:16.510 --rc genhtml_legend=1 00:25:16.510 --rc geninfo_all_blocks=1 00:25:16.510 --rc geninfo_unexecuted_blocks=1 00:25:16.510 00:25:16.510 ' 00:25:16.510 06:17:41 keyring_linux -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:25:16.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:16.510 --rc genhtml_branch_coverage=1 00:25:16.510 --rc genhtml_function_coverage=1 00:25:16.510 --rc genhtml_legend=1 00:25:16.510 --rc geninfo_all_blocks=1 00:25:16.510 --rc geninfo_unexecuted_blocks=1 00:25:16.510 00:25:16.510 ' 00:25:16.510 06:17:41 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:25:16.510 06:17:41 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:16.510 06:17:41 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:a979a798-a221-4879-b3c4-5aaa753fde06 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=a979a798-a221-4879-b3c4-5aaa753fde06 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:16.510 06:17:41 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:16.510 06:17:41 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.510 06:17:41 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.510 06:17:41 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.510 06:17:41 keyring_linux -- paths/export.sh@5 -- # export PATH 00:25:16.510 06:17:41 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@51 -- # : 0 
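The common.sh setup traced above also fixes the host identity used for every attach in this run: nvme-cli's gen-hostnqn emits a uuid-based NQN, and the uuid suffix is reused as the host ID. A minimal sketch of that derivation, outside the captured run (the ##*: expansion is an assumption about how common.sh strips the prefix; the gen-hostnqn command itself is the one shown in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:a979a798-...
    NVME_HOSTID=${NVME_HOSTNQN##*:}         # keep only the uuid after the last ':'
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
    echo "${NVME_HOST[@]}"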
00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:16.510 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:16.510 06:17:41 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:16.510 06:17:41 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:25:16.510 06:17:41 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:25:16.510 06:17:41 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:25:16.510 06:17:41 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:25:16.511 06:17:41 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:25:16.511 06:17:41 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:25:16.511 06:17:41 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # key=00112233445566778899aabbccddeeff 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:25:16.511 /tmp/:spdk-test:key0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:25:16.511 06:17:41 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:25:16.511 06:17:41 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@739 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@726 -- # local prefix key digest 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # prefix=NVMeTLSkey-1 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # key=112233445566778899aabbccddeeff00 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@728 -- # digest=0 00:25:16.511 06:17:41 keyring_linux -- nvmf/common.sh@729 -- # python - 00:25:16.511 06:17:42 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:25:16.511 /tmp/:spdk-test:key1 00:25:16.511 06:17:42 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:25:16.511 06:17:42 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=99805 00:25:16.511 06:17:42 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:16.511 06:17:42 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 99805 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99805 ']' 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:16.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:16.511 06:17:42 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:16.511 [2024-10-01 06:17:42.077869] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:25:16.511 [2024-10-01 06:17:42.078000] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99805 ] 00:25:16.770 [2024-10-01 06:17:42.210273] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:16.770 [2024-10-01 06:17:42.246658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:16.770 [2024-10-01 06:17:42.285639] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:17.733 [2024-10-01 06:17:43.060420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.733 null0 00:25:17.733 [2024-10-01 06:17:43.092358] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:17.733 [2024-10-01 06:17:43.092605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:25:17.733 198395371 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:25:17.733 28054299 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=99825 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:25:17.733 06:17:43 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 99825 /var/tmp/bperf.sock 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 99825 ']' 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:17.733 06:17:43 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:17.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:17.734 06:17:43 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:17.734 06:17:43 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:17.734 06:17:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:17.734 [2024-10-01 06:17:43.176557] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 22.11.4 initialization... 
00:25:17.734 [2024-10-01 06:17:43.176667] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99825 ] 00:25:17.734 [2024-10-01 06:17:43.319479] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.995 [2024-10-01 06:17:43.362079] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:25:17.995 06:17:43 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:17.995 06:17:43 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:25:17.995 06:17:43 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:25:17.995 06:17:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:25:18.254 06:17:43 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:25:18.254 06:17:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:18.513 [2024-10-01 06:17:43.942153] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:18.513 06:17:43 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:18.513 06:17:43 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:25:18.773 [2024-10-01 06:17:44.209681] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:18.773 nvme0n1 00:25:18.773 06:17:44 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:25:18.773 06:17:44 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:25:18.773 06:17:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:18.773 06:17:44 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:18.773 06:17:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:18.773 06:17:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:19.032 06:17:44 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:25:19.032 06:17:44 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:19.032 06:17:44 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:25:19.032 06:17:44 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:25:19.032 06:17:44 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:25:19.032 06:17:44 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:25:19.032 06:17:44 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@25 -- # sn=198395371 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
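Everything from the keyctl add at 06:17:43 through the refcount check above is the core of the positive keyring_linux path: the PSK interchange string is loaded into the kernel session keyring, bdevperf is started with --wait-for-rpc, the Linux keyring module is enabled over the bperf RPC socket, and the controller is attached over TCP with --psk naming the kernel key instead of a file. A condensed sketch of that sequence (the commands are the same ones expanded by the bperf_cmd/keyring wrappers in the trace; the rpc.py path and the key payload are abbreviated here):

    RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAx...JEiQ:" @s   # prints the key serial (198395371 in this run)
    $RPC keyring_linux_set_options --enable      # let SPDK resolve ":spdk-test:*" names via the kernel keyring
    $RPC framework_start_init                    # bdevperf was launched with --wait-for-rpc, so start it now
    $RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
    sn=$(keyctl search @s user :spdk-test:key0)  # serial looked up by the check_keys helper
    keyctl print "$sn"                           # payload should match the interchange string loaded above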
00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@26 -- # [[ 198395371 == \1\9\8\3\9\5\3\7\1 ]] 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 198395371 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:25:19.291 06:17:44 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.549 Running I/O for 1 seconds... 00:25:20.487 13919.00 IOPS, 54.37 MiB/s 00:25:20.487 Latency(us) 00:25:20.487 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.487 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:20.487 nvme0n1 : 1.01 13927.17 54.40 0.00 0.00 9147.55 3023.59 12392.26 00:25:20.487 =================================================================================================================== 00:25:20.487 Total : 13927.17 54.40 0.00 0.00 9147.55 3023.59 12392.26 00:25:20.487 { 00:25:20.487 "results": [ 00:25:20.487 { 00:25:20.487 "job": "nvme0n1", 00:25:20.487 "core_mask": "0x2", 00:25:20.487 "workload": "randread", 00:25:20.487 "status": "finished", 00:25:20.487 "queue_depth": 128, 00:25:20.487 "io_size": 4096, 00:25:20.487 "runtime": 1.008676, 00:25:20.487 "iops": 13927.167891374436, 00:25:20.487 "mibps": 54.40299957568139, 00:25:20.487 "io_failed": 0, 00:25:20.487 "io_timeout": 0, 00:25:20.487 "avg_latency_us": 9147.546373990474, 00:25:20.487 "min_latency_us": 3023.592727272727, 00:25:20.487 "max_latency_us": 12392.261818181818 00:25:20.487 } 00:25:20.487 ], 00:25:20.487 "core_count": 1 00:25:20.487 } 00:25:20.487 06:17:46 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:25:20.487 06:17:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:25:20.746 06:17:46 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:25:20.746 06:17:46 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:25:20.746 06:17:46 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:25:20.746 06:17:46 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:25:20.746 06:17:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:25:20.746 06:17:46 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@23 -- # return 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 
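With the controller attached, the run above drives the preconfigured randread job through bdevperf's RPC interface (about 13.9K IOPS over the 1-second window), detaches the controller, and confirms keyring_get_keys drops back to zero entries. The NOT wrapper that starts just above and continues below is the expected-failure half of the test: the same attach is retried with --psk :spdk-test:key1 and the wrapper requires the RPC to fail. A condensed sketch of the positive half (same sockets and scripts as the trace, absolute paths shortened):

    BPERF_PY=examples/bdev/bdevperf/bdevperf.py
    RPC="scripts/rpc.py -s /var/tmp/bperf.sock"
    $BPERF_PY -s /var/tmp/bperf.sock perform_tests   # runs the "-q 128 -o 4k -w randread -t 1" job bdevperf was launched with
    $RPC bdev_nvme_detach_controller nvme0           # after this, keyring_get_keys on the bperf socket reports 0 keys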
00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:21.316 06:17:46 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:25:21.316 [2024-10-01 06:17:46.895928] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:21.316 [2024-10-01 06:17:46.896388] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3ef30 (107): Transport endpoint is not connected 00:25:21.316 [2024-10-01 06:17:46.897376] nvme_tcp.c:2196:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa3ef30 (9): Bad file descriptor 00:25:21.316 [2024-10-01 06:17:46.898372] nvme_ctrlr.c:4193:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:21.316 [2024-10-01 06:17:46.898412] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:25:21.316 [2024-10-01 06:17:46.898422] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:25:21.316 [2024-10-01 06:17:46.898433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:25:21.316 request: 00:25:21.316 { 00:25:21.316 "name": "nvme0", 00:25:21.316 "trtype": "tcp", 00:25:21.316 "traddr": "127.0.0.1", 00:25:21.316 "adrfam": "ipv4", 00:25:21.316 "trsvcid": "4420", 00:25:21.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:21.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:21.316 "prchk_reftag": false, 00:25:21.316 "prchk_guard": false, 00:25:21.316 "hdgst": false, 00:25:21.316 "ddgst": false, 00:25:21.316 "psk": ":spdk-test:key1", 00:25:21.316 "allow_unrecognized_csi": false, 00:25:21.316 "method": "bdev_nvme_attach_controller", 00:25:21.316 "req_id": 1 00:25:21.316 } 00:25:21.316 Got JSON-RPC error response 00:25:21.316 response: 00:25:21.316 { 00:25:21.316 "code": -5, 00:25:21.316 "message": "Input/output error" 00:25:21.316 } 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:21.316 06:17:46 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@33 -- # sn=198395371 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 198395371 00:25:21.316 1 links removed 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:25:21.316 06:17:46 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:25:21.576 06:17:46 keyring_linux -- keyring/linux.sh@33 -- # sn=28054299 00:25:21.576 06:17:46 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 28054299 00:25:21.576 1 links removed 00:25:21.576 06:17:46 keyring_linux -- keyring/linux.sh@41 -- # killprocess 99825 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99825 ']' 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99825 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99825 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:21.576 killing process with pid 99825 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99825' 00:25:21.576 Received shutdown signal, test time was about 1.000000 seconds 00:25:21.576 00:25:21.576 Latency(us) 00:25:21.576 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:25:21.576 =================================================================================================================== 00:25:21.576 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@969 -- # kill 99825 00:25:21.576 06:17:46 keyring_linux -- common/autotest_common.sh@974 -- # wait 99825 00:25:21.576 06:17:47 keyring_linux -- keyring/linux.sh@42 -- # killprocess 99805 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 99805 ']' 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 99805 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 99805 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:21.577 killing process with pid 99805 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 99805' 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@969 -- # kill 99805 00:25:21.577 06:17:47 keyring_linux -- common/autotest_common.sh@974 -- # wait 99805 00:25:21.836 00:25:21.836 real 0m5.678s 00:25:21.836 user 0m11.163s 00:25:21.836 sys 0m1.422s 00:25:21.836 06:17:47 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:21.836 06:17:47 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:25:21.836 ************************************ 00:25:21.836 END TEST keyring_linux 00:25:21.836 ************************************ 00:25:21.836 06:17:47 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:25:21.836 06:17:47 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:25:21.836 06:17:47 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:25:21.836 06:17:47 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:25:21.836 06:17:47 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:25:21.836 06:17:47 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:25:21.836 06:17:47 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:25:21.836 06:17:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.836 06:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:21.836 06:17:47 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:25:21.836 06:17:47 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:25:21.836 06:17:47 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:25:21.836 06:17:47 -- common/autotest_common.sh@10 -- # set +x 00:25:23.741 INFO: APP EXITING 00:25:23.741 INFO: killing all VMs 00:25:23.741 INFO: killing vhost app 00:25:23.741 INFO: EXIT DONE 00:25:24.678 
0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:24.678 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:25:24.678 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:25:25.243 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:25.243 Cleaning 00:25:25.243 Removing: /var/run/dpdk/spdk0/config 00:25:25.243 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:25:25.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:25:25.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:25:25.244 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:25:25.244 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:25:25.244 Removing: /var/run/dpdk/spdk0/hugepage_info 00:25:25.244 Removing: /var/run/dpdk/spdk1/config 00:25:25.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:25:25.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:25:25.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:25:25.244 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:25:25.244 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:25:25.244 Removing: /var/run/dpdk/spdk1/hugepage_info 00:25:25.244 Removing: /var/run/dpdk/spdk2/config 00:25:25.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:25:25.244 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:25:25.503 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:25:25.503 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:25:25.503 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:25:25.503 Removing: /var/run/dpdk/spdk2/hugepage_info 00:25:25.503 Removing: /var/run/dpdk/spdk3/config 00:25:25.503 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:25:25.503 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:25:25.503 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:25:25.503 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:25:25.503 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:25:25.503 Removing: /var/run/dpdk/spdk3/hugepage_info 00:25:25.503 Removing: /var/run/dpdk/spdk4/config 00:25:25.503 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:25:25.503 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:25:25.503 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:25:25.503 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:25:25.503 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:25:25.503 Removing: /var/run/dpdk/spdk4/hugepage_info 00:25:25.503 Removing: /dev/shm/nvmf_trace.0 00:25:25.503 Removing: /dev/shm/spdk_tgt_trace.pid68955 00:25:25.503 Removing: /var/run/dpdk/spdk0 00:25:25.503 Removing: /var/run/dpdk/spdk1 00:25:25.503 Removing: /var/run/dpdk/spdk2 00:25:25.503 Removing: /var/run/dpdk/spdk3 00:25:25.503 Removing: /var/run/dpdk/spdk4 00:25:25.503 Removing: /var/run/dpdk/spdk_pid68808 00:25:25.503 Removing: /var/run/dpdk/spdk_pid68955 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69148 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69235 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69255 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69359 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69369 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69503 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69699 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69853 00:25:25.503 Removing: /var/run/dpdk/spdk_pid69925 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70002 00:25:25.503 Removing: 
/var/run/dpdk/spdk_pid70095 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70172 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70206 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70236 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70306 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70403 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70838 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70877 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70915 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70929 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70984 00:25:25.503 Removing: /var/run/dpdk/spdk_pid70987 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71054 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71057 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71108 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71113 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71156 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71179 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71309 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71339 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71427 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71754 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71766 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71797 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71810 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71830 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71845 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71858 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71874 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71893 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71906 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71922 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71941 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71954 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71970 00:25:25.503 Removing: /var/run/dpdk/spdk_pid71989 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72002 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72018 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72037 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72049 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72066 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72095 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72110 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72138 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72206 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72234 00:25:25.503 Removing: /var/run/dpdk/spdk_pid72244 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72267 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72282 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72284 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72327 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72340 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72364 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72378 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72382 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72391 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72401 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72405 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72420 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72424 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72451 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72479 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72483 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72517 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72521 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72534 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72571 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72577 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72609 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72611 
00:25:25.763 Removing: /var/run/dpdk/spdk_pid72624 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72626 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72634 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72641 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72643 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72656 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72727 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72769 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72881 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72917 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72956 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72970 00:25:25.763 Removing: /var/run/dpdk/spdk_pid72992 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73007 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73038 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73054 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73132 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73148 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73186 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73248 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73299 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73329 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73423 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73464 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73498 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73724 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73811 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73845 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73869 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73908 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73936 00:25:25.763 Removing: /var/run/dpdk/spdk_pid73975 00:25:25.763 Removing: /var/run/dpdk/spdk_pid74001 00:25:25.763 Removing: /var/run/dpdk/spdk_pid74389 00:25:25.763 Removing: /var/run/dpdk/spdk_pid74429 00:25:25.763 Removing: /var/run/dpdk/spdk_pid74766 00:25:25.763 Removing: /var/run/dpdk/spdk_pid75219 00:25:25.763 Removing: /var/run/dpdk/spdk_pid75478 00:25:25.763 Removing: /var/run/dpdk/spdk_pid76313 00:25:25.763 Removing: /var/run/dpdk/spdk_pid77216 00:25:25.763 Removing: /var/run/dpdk/spdk_pid77340 00:25:25.763 Removing: /var/run/dpdk/spdk_pid77402 00:25:25.763 Removing: /var/run/dpdk/spdk_pid78808 00:25:25.763 Removing: /var/run/dpdk/spdk_pid79117 00:25:25.763 Removing: /var/run/dpdk/spdk_pid82793 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83163 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83272 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83399 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83420 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83441 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83462 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83541 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83669 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83805 00:25:25.763 Removing: /var/run/dpdk/spdk_pid83879 00:25:25.763 Removing: /var/run/dpdk/spdk_pid84067 00:25:25.763 Removing: /var/run/dpdk/spdk_pid84135 00:25:25.763 Removing: /var/run/dpdk/spdk_pid84216 00:25:25.763 Removing: /var/run/dpdk/spdk_pid84562 00:25:25.763 Removing: /var/run/dpdk/spdk_pid84968 00:25:26.022 Removing: /var/run/dpdk/spdk_pid84969 00:25:26.022 Removing: /var/run/dpdk/spdk_pid84970 00:25:26.022 Removing: /var/run/dpdk/spdk_pid85238 00:25:26.022 Removing: /var/run/dpdk/spdk_pid85476 00:25:26.022 Removing: /var/run/dpdk/spdk_pid85488 00:25:26.023 Removing: /var/run/dpdk/spdk_pid87845 00:25:26.023 Removing: /var/run/dpdk/spdk_pid87847 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88170 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88184 00:25:26.023 Removing: 
/var/run/dpdk/spdk_pid88198 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88229 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88240 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88326 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88333 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88436 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88443 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88546 00:25:26.023 Removing: /var/run/dpdk/spdk_pid88553 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89007 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89056 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89159 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89243 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89587 00:25:26.023 Removing: /var/run/dpdk/spdk_pid89776 00:25:26.023 Removing: /var/run/dpdk/spdk_pid90214 00:25:26.023 Removing: /var/run/dpdk/spdk_pid90756 00:25:26.023 Removing: /var/run/dpdk/spdk_pid91595 00:25:26.023 Removing: /var/run/dpdk/spdk_pid92223 00:25:26.023 Removing: /var/run/dpdk/spdk_pid92235 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94256 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94303 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94356 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94403 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94514 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94574 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94627 00:25:26.023 Removing: /var/run/dpdk/spdk_pid94673 00:25:26.023 Removing: /var/run/dpdk/spdk_pid95025 00:25:26.023 Removing: /var/run/dpdk/spdk_pid96239 00:25:26.023 Removing: /var/run/dpdk/spdk_pid96376 00:25:26.023 Removing: /var/run/dpdk/spdk_pid96606 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97190 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97350 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97507 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97598 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97765 00:25:26.023 Removing: /var/run/dpdk/spdk_pid97875 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98575 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98609 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98640 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98895 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98929 00:25:26.023 Removing: /var/run/dpdk/spdk_pid98960 00:25:26.023 Removing: /var/run/dpdk/spdk_pid99431 00:25:26.023 Removing: /var/run/dpdk/spdk_pid99436 00:25:26.023 Removing: /var/run/dpdk/spdk_pid99685 00:25:26.023 Removing: /var/run/dpdk/spdk_pid99805 00:25:26.023 Removing: /var/run/dpdk/spdk_pid99825 00:25:26.023 Clean 00:25:26.023 06:17:51 -- common/autotest_common.sh@1451 -- # return 0 00:25:26.023 06:17:51 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:25:26.023 06:17:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.023 06:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:26.282 06:17:51 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:25:26.282 06:17:51 -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:26.282 06:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:26.282 06:17:51 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:25:26.282 06:17:51 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:25:26.282 06:17:51 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:25:26.282 06:17:51 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:25:26.282 06:17:51 -- spdk/autotest.sh@394 -- # hostname 00:25:26.282 06:17:51 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:25:26.540 geninfo: WARNING: invalid characters removed from testname! 00:25:53.089 06:18:15 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:54.027 06:18:19 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:56.561 06:18:22 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:25:59.848 06:18:24 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:02.380 06:18:27 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:04.938 06:18:30 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:26:07.494 06:18:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:26:07.494 06:18:32 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:26:07.494 06:18:32 -- common/autotest_common.sh@1681 -- $ lcov --version 00:26:07.494 06:18:32 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:26:07.494 06:18:33 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:26:07.494 06:18:33 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:26:07.494 06:18:33 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:26:07.494 06:18:33 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:26:07.494 06:18:33 -- scripts/common.sh@336 -- $ IFS=.-: 00:26:07.494 06:18:33 -- scripts/common.sh@336 -- $ read -ra ver1 
00:26:07.494 06:18:33 -- scripts/common.sh@337 -- $ IFS=.-: 00:26:07.494 06:18:33 -- scripts/common.sh@337 -- $ read -ra ver2 00:26:07.494 06:18:33 -- scripts/common.sh@338 -- $ local 'op=<' 00:26:07.494 06:18:33 -- scripts/common.sh@340 -- $ ver1_l=2 00:26:07.494 06:18:33 -- scripts/common.sh@341 -- $ ver2_l=1 00:26:07.494 06:18:33 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:26:07.494 06:18:33 -- scripts/common.sh@344 -- $ case "$op" in 00:26:07.494 06:18:33 -- scripts/common.sh@345 -- $ : 1 00:26:07.494 06:18:33 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:26:07.494 06:18:33 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.494 06:18:33 -- scripts/common.sh@365 -- $ decimal 1 00:26:07.494 06:18:33 -- scripts/common.sh@353 -- $ local d=1 00:26:07.494 06:18:33 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:26:07.494 06:18:33 -- scripts/common.sh@355 -- $ echo 1 00:26:07.494 06:18:33 -- scripts/common.sh@365 -- $ ver1[v]=1 00:26:07.494 06:18:33 -- scripts/common.sh@366 -- $ decimal 2 00:26:07.494 06:18:33 -- scripts/common.sh@353 -- $ local d=2 00:26:07.494 06:18:33 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:26:07.494 06:18:33 -- scripts/common.sh@355 -- $ echo 2 00:26:07.494 06:18:33 -- scripts/common.sh@366 -- $ ver2[v]=2 00:26:07.494 06:18:33 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:26:07.494 06:18:33 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:26:07.494 06:18:33 -- scripts/common.sh@368 -- $ return 0 00:26:07.494 06:18:33 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.494 06:18:33 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:26:07.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.494 --rc genhtml_branch_coverage=1 00:26:07.494 --rc genhtml_function_coverage=1 00:26:07.494 --rc genhtml_legend=1 00:26:07.494 --rc geninfo_all_blocks=1 00:26:07.494 --rc geninfo_unexecuted_blocks=1 00:26:07.494 00:26:07.494 ' 00:26:07.494 06:18:33 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:26:07.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.494 --rc genhtml_branch_coverage=1 00:26:07.494 --rc genhtml_function_coverage=1 00:26:07.494 --rc genhtml_legend=1 00:26:07.494 --rc geninfo_all_blocks=1 00:26:07.494 --rc geninfo_unexecuted_blocks=1 00:26:07.494 00:26:07.494 ' 00:26:07.494 06:18:33 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:26:07.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.494 --rc genhtml_branch_coverage=1 00:26:07.494 --rc genhtml_function_coverage=1 00:26:07.494 --rc genhtml_legend=1 00:26:07.494 --rc geninfo_all_blocks=1 00:26:07.494 --rc geninfo_unexecuted_blocks=1 00:26:07.494 00:26:07.494 ' 00:26:07.494 06:18:33 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:26:07.494 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.494 --rc genhtml_branch_coverage=1 00:26:07.494 --rc genhtml_function_coverage=1 00:26:07.494 --rc genhtml_legend=1 00:26:07.494 --rc geninfo_all_blocks=1 00:26:07.494 --rc geninfo_unexecuted_blocks=1 00:26:07.494 00:26:07.494 ' 00:26:07.494 06:18:33 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:07.494 06:18:33 -- scripts/common.sh@15 -- $ shopt -s extglob 00:26:07.494 06:18:33 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:26:07.494 06:18:33 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:26:07.494 06:18:33 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.494 06:18:33 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.494 06:18:33 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.494 06:18:33 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.494 06:18:33 -- paths/export.sh@5 -- $ export PATH 00:26:07.494 06:18:33 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.494 06:18:33 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:26:07.494 06:18:33 -- common/autobuild_common.sh@479 -- $ date +%s 00:26:07.494 06:18:33 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727763513.XXXXXX 00:26:07.494 06:18:33 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727763513.EVDOrq 00:26:07.494 06:18:33 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:26:07.494 06:18:33 -- common/autobuild_common.sh@485 -- $ '[' -n v22.11.4 ']' 00:26:07.494 06:18:33 -- common/autobuild_common.sh@486 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:26:07.494 06:18:33 -- common/autobuild_common.sh@486 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:26:07.494 06:18:33 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:26:07.494 06:18:33 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:26:07.494 06:18:33 -- common/autobuild_common.sh@495 -- $ get_config_params 00:26:07.494 06:18:33 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:26:07.494 06:18:33 -- common/autotest_common.sh@10 -- $ set +x 00:26:07.494 06:18:33 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:26:07.494 06:18:33 -- 
common/autobuild_common.sh@497 -- $ start_monitor_resources 00:26:07.494 06:18:33 -- pm/common@17 -- $ local monitor 00:26:07.494 06:18:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.494 06:18:33 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:07.494 06:18:33 -- pm/common@25 -- $ sleep 1 00:26:07.494 06:18:33 -- pm/common@21 -- $ date +%s 00:26:07.494 06:18:33 -- pm/common@21 -- $ date +%s 00:26:07.494 06:18:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727763513 00:26:07.494 06:18:33 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727763513 00:26:07.754 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727763513_collect-cpu-load.pm.log 00:26:07.754 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727763513_collect-vmstat.pm.log 00:26:08.692 06:18:34 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:26:08.692 06:18:34 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:26:08.692 06:18:34 -- spdk/autopackage.sh@14 -- $ timing_finish 00:26:08.692 06:18:34 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:26:08.693 06:18:34 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:26:08.693 06:18:34 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:08.693 06:18:34 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:26:08.693 06:18:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:26:08.693 06:18:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:26:08.693 06:18:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:08.693 06:18:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:26:08.693 06:18:34 -- pm/common@44 -- $ pid=101605 00:26:08.693 06:18:34 -- pm/common@50 -- $ kill -TERM 101605 00:26:08.693 06:18:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:26:08.693 06:18:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:26:08.693 06:18:34 -- pm/common@44 -- $ pid=101606 00:26:08.693 06:18:34 -- pm/common@50 -- $ kill -TERM 101606 00:26:08.693 + [[ -n 5997 ]] 00:26:08.693 + sudo kill 5997 00:26:08.703 [Pipeline] } 00:26:08.720 [Pipeline] // timeout 00:26:08.727 [Pipeline] } 00:26:08.743 [Pipeline] // stage 00:26:08.750 [Pipeline] } 00:26:08.767 [Pipeline] // catchError 00:26:08.778 [Pipeline] stage 00:26:08.780 [Pipeline] { (Stop VM) 00:26:08.794 [Pipeline] sh 00:26:09.075 + vagrant halt 00:26:13.274 ==> default: Halting domain... 00:26:18.561 [Pipeline] sh 00:26:18.841 + vagrant destroy -f 00:26:23.052 ==> default: Removing domain... 
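Editor's note: the tail of the job is housekeeping. The resource monitors started for the packaging step (collect-cpu-load and collect-vmstat) are stopped by reading their PID files from the power/ output directory and sending SIGTERM, the leftover test process (PID 5997 here) is killed, and the Vagrant VM is halted and then destroyed. A hedged sketch of the PID-file based stop, roughly what the pm/common signal_monitor_resources trace shows, could be:

#!/usr/bin/env bash
# Rough sketch of stopping the monitors via their PID files (paths taken from the log).

power_dir=/home/vagrant/spdk_repo/spdk/../output/power

signal_monitor_resources() {
    local signal=${1:-TERM} monitor pid pid_file
    for monitor in collect-cpu-load collect-vmstat; do
        pid_file="$power_dir/$monitor.pid"
        # Only signal monitors that actually left a PID file behind.
        [[ -e $pid_file ]] || continue
        pid=$(<"$pid_file")
        kill "-$signal" "$pid" 2>/dev/null || true
        rm -f "$pid_file"
    done
}

# Run automatically when the packaging script exits, mirroring the EXIT trap above.
trap 'signal_monitor_resources TERM' EXIT

The vagrant teardown that follows is the usual two-step: vagrant halt shuts the guest down cleanly, and vagrant destroy -f removes the domain without prompting even if the halt had problems.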
00:26:23.061 [Pipeline] sh 00:26:23.337 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:26:23.346 [Pipeline] } 00:26:23.360 [Pipeline] // stage 00:26:23.366 [Pipeline] } 00:26:23.380 [Pipeline] // dir 00:26:23.386 [Pipeline] } 00:26:23.400 [Pipeline] // wrap 00:26:23.408 [Pipeline] } 00:26:23.423 [Pipeline] // catchError 00:26:23.436 [Pipeline] stage 00:26:23.438 [Pipeline] { (Epilogue) 00:26:23.452 [Pipeline] sh 00:26:23.733 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:26:30.338 [Pipeline] catchError 00:26:30.340 [Pipeline] { 00:26:30.352 [Pipeline] sh 00:26:30.631 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:26:30.890 Artifacts sizes are good 00:26:30.899 [Pipeline] } 00:26:30.912 [Pipeline] // catchError 00:26:30.921 [Pipeline] archiveArtifacts 00:26:30.928 Archiving artifacts 00:26:31.120 [Pipeline] cleanWs 00:26:31.131 [WS-CLEANUP] Deleting project workspace... 00:26:31.131 [WS-CLEANUP] Deferred wipeout is used... 00:26:31.137 [WS-CLEANUP] done 00:26:31.139 [Pipeline] } 00:26:31.153 [Pipeline] // stage 00:26:31.158 [Pipeline] } 00:26:31.171 [Pipeline] // node 00:26:31.175 [Pipeline] End of Pipeline 00:26:31.218 Finished: SUCCESS
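Editor's note: the "Artifacts sizes are good" line in the epilogue comes from a size-guard script run after compression and before archiving. A minimal, hypothetical stand-in for that kind of check is sketched below; the 5 GiB limit and the directory argument are assumptions, not the values used by the real check_artifacts_size.sh.

#!/usr/bin/env bash
# Hypothetical artifact size guard; the real check_artifacts_size.sh may differ.
set -euo pipefail

artifacts_dir=${1:-output}
limit_kb=$((5 * 1024 * 1024))   # assumed 5 GiB cap, expressed in KiB

used_kb=$(du -sk "$artifacts_dir" | awk '{print $1}')
if (( used_kb > limit_kb )); then
    echo "Artifacts too large: ${used_kb} KiB (limit ${limit_kb} KiB)" >&2
    exit 1
fi
echo "Artifacts sizes are good"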